Creators of machine learning models can use watermarking to demonstrate
ownership if their models are stolen. Several recent
proposals watermark deep neural network (DNN) models using backdooring:
training them with additional mislabeled data. Backdooring requires full access
to the training data and control of the training process. This is feasible when
a single party trains the model in a centralized manner, but not in a federated
learning setting where the training process and training data are distributed
among several parties. In this paper, we introduce WAFFLE, the first approach
to watermark DNN models in federated learning. WAFFLE adds a re-training step
after each aggregation of local models into the global model. We show that
WAFFLE efficiently embeds a resilient watermark into models with negligible
test accuracy degradation (-0.17%) and does not require access to the training
data. We introduce a novel technique to generate the backdoor used as a
watermark. It outperforms prior techniques, imposing no communication overhead
and only a low computational overhead (+2.8%).
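
The backdoor-based watermarking that WAFFLE builds on can be illustrated in a few lines. The sketch below assumes PyTorch; a hypothetical random-noise trigger set stands in for the paper's novel backdoor-generation technique, which the abstract does not detail. The model is fine-tuned to memorize mislabeled trigger inputs, and ownership is later checked by measuring accuracy on those triggers.

```python
# Minimal sketch of watermarking-by-backdooring (illustrative, not the
# paper's exact method). Assumes PyTorch; the trigger set here is random
# noise with random "wrong" labels, a stand-in for WAFFLE's generator.
import torch
import torch.nn as nn

def make_trigger_set(n, shape=(1, 28, 28), num_classes=10, seed=0):
    """Out-of-distribution inputs paired with arbitrary (mislabeled) classes."""
    g = torch.Generator().manual_seed(seed)
    x = torch.rand((n, *shape), generator=g)
    y = torch.randint(0, num_classes, (n,), generator=g)
    return x, y

def embed_watermark(model, trigger_x, trigger_y, epochs=50, lr=1e-3):
    """Fine-tune the model until it memorizes the trigger set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(trigger_x), trigger_y)
        loss.backward()
        opt.step()
    return model

def verify_watermark(model, trigger_x, trigger_y, threshold=0.9):
    """Ownership claim holds if a suspect model still labels the triggers as trained."""
    model.eval()
    with torch.no_grad():
        preds = model(trigger_x).argmax(dim=1)
    return (preds == trigger_y).float().mean().item() >= threshold
```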
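On the server side, WAFFLE's key idea is the re-training step after every aggregation round. Below is a minimal sketch assuming FedAvg aggregation and reusing the `embed_watermark` helper above; `clients` and `local_update` are hypothetical placeholders for the (private) client-side training.

```python
# Sketch of one WAFFLE federated round, assuming FedAvg aggregation.
# `local_update(model, client)` is a hypothetical callback: each client
# trains a copy of the global model on its own data the server never sees.
import copy
import torch

def fedavg(state_dicts):
    """FedAvg: element-wise average of the clients' model parameters."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg

def waffle_round(global_model, clients, local_update, trigger_x, trigger_y):
    """One federated round with WAFFLE's watermark re-embedding step."""
    local_models = [local_update(copy.deepcopy(global_model), c) for c in clients]
    # Standard aggregation of local models into the global model.
    global_model.load_state_dict(fedavg([m.state_dict() for m in local_models]))
    # WAFFLE's extra step: re-train on the trigger set so the watermark is
    # not averaged away by the clients' updates.
    return embed_watermark(global_model, trigger_x, trigger_y, epochs=5)
```

Because only the server touches the trigger set, this step needs no access to clients' training data and adds no communication, consistent with the overheads reported in the abstract.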

Authors: Buse Gul Atli, Yuxi Xia, Samuel Marchal, N. Asokan
