Analyzing data owned by several parties while achieving a good trade-off
between utility and privacy is a key challenge in federated learning and
analytics. In this work, we introduce a novel relaxation of local differential
privacy (LDP) that naturally arises in fully decentralized protocols, i.e.,
when participants exchange information by communicating along the edges of a
network graph. This relaxation, which we call network DP, captures the fact that
users have only a local view of the decentralized system. To show the relevance
of network DP, we study a decentralized model of computation where a token
performs a walk on the network graph and is updated sequentially by the party
who receives it. For tasks such as real summation, histogram computation and
optimization with gradient descent, we propose simple algorithms on ring and
complete topologies. We prove that the privacy-utility trade-offs of our
algorithms significantly improve upon LDP, and in some cases even match what
can be achieved with methods based on trusted/secure aggregation and shuffling.
Our experiments illustrate the superior utility of our approach when training a
machine learning model with stochastic gradient descent.
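To make the token-walk model concrete, here is a minimal sketch of the kind of protocol the abstract describes for real summation on a ring: a token visits each party once, and the receiving party adds its private value plus locally generated noise before forwarding the token. This is an illustration under stated assumptions, not the paper's actual algorithm: the function name `ring_token_sum`, the Gaussian noise mechanism, and the parameter `sigma` are all hypothetical choices made for this sketch.

```python
import numpy as np

def ring_token_sum(private_values, sigma, rng=None):
    """Token walk on a ring for noisy real summation (illustrative sketch).

    Each party receives the token, adds its private value plus Gaussian
    noise of scale `sigma`, and forwards the token to its ring neighbor.
    A party only ever observes the aggregated token it receives, which is
    the "local view" of the decentralized system that network DP formalizes.
    """
    rng = np.random.default_rng() if rng is None else rng
    token = 0.0
    for x_i in private_values:  # fixed ring order: party 0 -> 1 -> ... -> n-1
        token += x_i + rng.normal(0.0, sigma)  # local perturbation
    return token

# Usage: 100 parties with values in [0, 1] and an illustrative noise scale.
values = np.random.default_rng(0).uniform(0.0, 1.0, size=100)
print(f"true sum = {values.sum():.2f}, "
      f"noisy estimate = {ring_token_sum(values, sigma=0.5):.2f}")
```

The intuition suggested by the abstract is that, because each party sees only the running token rather than individual contributions, earlier contributions are partially hidden from later participants; this is what would allow a smaller per-party noise scale than LDP requires for a comparable privacy guarantee.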

Authors: Edwige Cyffers, Aurélien Bellet