Differentially Private Federated Learning (DPFL) is an emerging field with
many applications. Gradient-averaging-based DPFL methods require costly
communication rounds and hardly work with large-capacity models, due to the
explicit dimension dependence in their added noise. In this work, inspired by
knowledge transfer in non-federated private learning from Papernot et al. (2017;
2018), we design two new DPFL schemes that vote among the data labels returned
by each local model, instead of averaging gradients, which avoids the dimension
dependence and significantly reduces the communication cost.
Theoretically, by applying secure multi-party computation, we can
exponentially amplify the (data-dependent) privacy guarantees when the margin
of the voting scores is large. Extensive experiments show that our approaches
significantly improve the privacy-utility trade-off over the state of the art
in DPFL.
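
The label-voting idea follows the PATE-style noisy-max mechanism from Papernot et al.: each local model predicts a label, the per-class vote counts receive calibrated noise, and the noisy winner is released. The sketch below is an illustrative simplification (not the paper's actual scheme); the function name, Laplace noise choice, and parameters are assumptions for illustration.

```python
import numpy as np

def noisy_label_vote(local_predictions, num_classes, noise_scale, rng):
    """Aggregate local models' predicted labels via a noisy majority vote.

    Illustrative PATE-style noisy-max sketch, not the paper's exact scheme.
    local_predictions: one predicted class index per local model.
    noise_scale: Laplace scale calibrated to the desired privacy level.
    """
    # Tally how many local models voted for each class.
    counts = np.bincount(local_predictions, minlength=num_classes).astype(float)
    # Add independent Laplace noise to each vote count; when the margin
    # between the top two counts is large, the noise rarely flips the winner,
    # which is the intuition behind the margin-based privacy amplification.
    counts += rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Example: 10 local models, a strong margin in favor of class 2.
rng = np.random.default_rng(0)
preds = np.array([2, 2, 2, 2, 1, 0, 2, 2, 2, 1])
label = noisy_label_vote(preds, num_classes=3, noise_scale=1.0, rng=rng)
```

Note that only a single label (a few bits) is communicated per query, rather than a full gradient vector, which is the source of the communication savings and the dimension independence mentioned above.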

Authors: <a href="http://arxiv.org/find/cs/1/au:+Zhu_Y/0/1/0/all/0/1">Yuqing Zhu</a>, <a href="http://arxiv.org/find/cs/1/au:+Yu_X/0/1/0/all/0/1">Xiang Yu</a>, <a href="http://arxiv.org/find/cs/1/au:+Tsai_Y/0/1/0/all/0/1">Yi-Hsuan Tsai</a>, <a href="http://arxiv.org/find/cs/1/au:+Pittaluga_F/0/1/0/all/0/1">Francesco Pittaluga</a>, <a href="http://arxiv.org/find/cs/1/au:+Faraki_M/0/1/0/all/0/1">Masoud Faraki</a>, <a href="http://arxiv.org/find/cs/1/au:+chandraker_M/0/1/0/all/0/1">Manmohan Chandraker</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_Y/0/1/0/all/0/1">Yu-Xiang Wang</a>
