Federated learning enables clients to collaboratively learn a shared global
model without sharing their local training data with a cloud server. However,
malicious clients can corrupt the global model to predict incorrect labels for
testing examples. Existing defenses against malicious clients leverage
Byzantine-robust federated learning methods. However, these methods cannot
provably guarantee that the predicted label for a testing example is not
affected by malicious clients. We bridge this gap via ensemble federated
learning. In particular, given any base federated learning algorithm, we use
the algorithm to learn multiple global models, each of which is learnt using a
randomly selected subset of clients. When predicting the label of a testing
example, we take a majority vote among the global models. We show that our
ensemble federated learning with any base federated learning algorithm is
provably secure against malicious clients. Specifically, the label predicted by
our ensemble global model for a testing example is provably not affected by a
bounded number of malicious clients. Moreover, we show that our derived bound
is tight. We evaluate our method on MNIST and Human Activity Recognition
datasets. For instance, our method can achieve a certified accuracy of 88% on
MNIST when 20 out of 1,000 clients are malicious.
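To make the procedure concrete, below is a minimal Python sketch of the ensemble scheme described above. The base training routine `fl_train` (e.g., FedAvg), the subset size `k`, and the models' `predict` method are illustrative assumptions, not the paper's actual API. Intuitively, the majority vote remains certifiably unaffected as long as malicious clients can flip fewer models' predictions than half the gap between the top two vote counts; the paper derives the exact, tight bound.

```python
# Sketch of ensemble federated learning (illustrative; names are hypothetical).
import random
from collections import Counter

def train_ensemble(fl_train, clients, num_models, k, seed=0):
    """Learn `num_models` global models with the base federated learning
    algorithm `fl_train`, each from a randomly selected subset of `k` clients."""
    rng = random.Random(seed)
    models = []
    for _ in range(num_models):
        subset = rng.sample(clients, k)   # random subset of clients
        models.append(fl_train(subset))   # one global model per subset
    return models

def ensemble_predict(models, x):
    """Predict the label of testing example `x` by majority vote
    among the global models' predictions."""
    votes = Counter(model.predict(x) for model in models)
    return votes.most_common(1)[0][0]
```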

Authors: Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
