Data poisoning and backdoor attacks manipulate training data to induce
security breaches in a victim model. These attacks can be provably deflected
using differentially private (DP) training methods, although this comes with a
sharp decrease in model performance. The InstaHide method has recently been
proposed as an alternative to DP training that leverages supposed privacy
properties of the mixup augmentation, although without rigorous guarantees. In
this work, we show that strong data augmentations, such as mixup and random
additive noise, nullify poisoning attacks while incurring only a small accuracy
trade-off. To explain these findings, we propose a training method,
DP-InstaHide, which combines the mixup regularizer with additive noise. A
rigorous analysis of DP-InstaHide shows that mixup does indeed have privacy
advantages, and that training with k-way mixup provably yields at least k times
stronger DP guarantees than a naive DP mechanism. Because mixup (as opposed to
noise) is beneficial to model performance, DP-InstaHide provides a mechanism
for achieving stronger empirical performance against poisoning attacks than
other known DP methods.
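
For concreteness, the sketch below illustrates the kind of k-way mixup plus additive-noise augmentation described above. It is a minimal illustration, not the paper's exact recipe: the Dirichlet-sampled mixing weights, the Laplace noise scale, and the `kway_mixup_with_noise` helper name are all assumptions made here for the example.

```python
# Minimal sketch of k-way mixup followed by additive noise, in the spirit
# of the DP-InstaHide mechanism described above. Hyperparameters (Dirichlet
# concentration, Laplace scale) are illustrative, not the paper's values.
import numpy as np

def kway_mixup_with_noise(images, labels, k=4, alpha=1.0,
                          noise_scale=0.1, rng=None):
    """Mix each example with k-1 random partners, then add Laplace noise.

    images: (n, ...) float array with values in [0, 1].
    labels: (n, c) one-hot array; labels are mixed with the same weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = images.shape[0]
    # Column 0 is the example itself; the other k-1 columns are random partners.
    partners = np.stack(
        [np.arange(n)] + [rng.permutation(n) for _ in range(k - 1)], axis=1
    )  # shape (n, k)
    # One convex k-vector of mixing weights per example.
    weights = rng.dirichlet(alpha * np.ones(k), size=n)  # shape (n, k)
    w = weights.reshape(n, k, *([1] * (images.ndim - 1)))
    mixed_images = (w * images[partners]).sum(axis=1)
    mixed_labels = (weights[..., None] * labels[partners]).sum(axis=1)
    # Additive noise: in the mechanism above, this is the ingredient that
    # yields a formal DP guarantee, amplified by the k-way mixing.
    mixed_images += rng.laplace(scale=noise_scale, size=mixed_images.shape)
    return np.clip(mixed_images, 0.0, 1.0), mixed_labels

# Example usage on a toy batch of 8 grayscale 32x32 images with 10 classes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((8, 32, 32, 1))
    y = np.eye(10)[rng.integers(0, 10, size=8)]
    mx, my = kway_mixup_with_noise(x, y, k=4, rng=rng)
    print(mx.shape, my.shape)  # (8, 32, 32, 1) (8, 10)
```
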

Authors: Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
