Federated learning (FL) is a training paradigm in which clients
collaboratively learn models by repeatedly sharing information, without
substantially compromising the privacy of their local sensitive data. In this paper,
we introduce federated $f$-differential privacy, a new notion specifically
tailored to the federated setting, based on the framework of Gaussian
differential privacy. Federated $f$-differential privacy operates at the
record level: it provides a privacy guarantee for each individual record of a
client's data against adversaries. We then propose PriFedSync, a generic
private federated learning framework that accommodates a large family of
state-of-the-art FL algorithms and provably achieves federated
$f$-differential privacy. Finally, we empirically demonstrate the trade-off
between the privacy guarantee and prediction performance for models trained by
PriFedSync on computer vision tasks.
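To make the record-level privacy idea concrete, below is a minimal, illustrative sketch of one synchronized round of private federated averaging with the Gaussian mechanism: each client clips its model update to a bounded L2 norm (so the sensitivity of the release is controlled) and adds Gaussian noise before the server averages. This is a generic DP-FL sketch, not the paper's actual PriFedSync algorithm; the function names, `clip_norm`, and `noise_multiplier` parameters are assumptions for illustration.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale the client update so its L2 norm is at most clip_norm,
    # which bounds the sensitivity of the Gaussian mechanism.
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        return update * (clip_norm / norm)
    return update

def private_round(global_model, client_updates, clip_norm=1.0,
                  noise_multiplier=1.0, rng=None):
    # One synchronized round: each client clips its update and adds
    # Gaussian noise proportional to clip_norm; the server averages
    # the noisy updates and applies them to the global model.
    rng = np.random.default_rng() if rng is None else rng
    noisy = []
    for u in client_updates:
        c = clip_update(np.asarray(u, dtype=float), clip_norm)
        c = c + rng.normal(0.0, noise_multiplier * clip_norm, size=c.shape)
        noisy.append(c)
    return np.asarray(global_model, dtype=float) + np.mean(noisy, axis=0)
```

The `noise_multiplier` controls the privacy/utility trade-off the abstract refers to: larger noise gives a stronger (Gaussian DP) guarantee per round at the cost of prediction performance.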

Authors: <a href="http://arxiv.org/find/stat/1/au:+Zheng_Q/0/1/0/all/0/1">Qinqing Zheng</a>, <a href="http://arxiv.org/find/stat/1/au:+Chen_S/0/1/0/all/0/1">Shuxiao Chen</a>, <a href="http://arxiv.org/find/stat/1/au:+Long_Q/0/1/0/all/0/1">Qi Long</a>, <a href="http://arxiv.org/find/stat/1/au:+Su_W/0/1/0/all/0/1">Weijie J. Su</a>
