Adversarial examples pose a threat to deep neural network models in a variety
of scenarios, ranging from “white box” settings, where the adversary has
complete knowledge of the model, to “black box” settings, where the adversary
can only query it. In this paper, we explore the use of output randomization as
a defense against attacks in both the black box and white box threat models and
propose two defenses. In
the first defense, we propose output randomization at test time to thwart
finite difference attacks in black box settings. Since this type of attack
relies on repeated queries to the model to estimate gradients, we investigate
the use of randomization to thwart such adversaries from successfully creating
adversarial examples. We empirically show that this defense can limit the
success rate of a black box adversary using the Zeroth Order Optimization
attack to 0%. Second, we propose output randomization training as a defense
against white box adversaries. Unlike prior approaches that use randomization,
our defense does not require randomization at test time, which renders the
Backward Pass Differentiable Approximation attack, shown to be effective
against other randomization defenses, inapplicable. Additionally, this defense has low
overhead and is easily implemented, allowing it to be used together with other
defenses across various model architectures. We evaluate output randomization
training against the Projected Gradient Descent (PGD) attack and show that the
defense can reduce the PGD attack’s success rate to 12% when using
cross-entropy loss.
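
As a rough illustration of why output randomization frustrates finite difference attacks, the sketch below adds Gaussian noise to a toy classifier’s output probabilities and compares a ZOO-style coordinate-wise gradient estimate against the clean and the randomized model. The toy linear-softmax model, the noise scale `sigma`, and the cross-entropy-style loss are assumptions made for illustration, not the paper’s exact construction.

```python
# Illustrative sketch only: a toy model with output randomization at query
# time, and a ZOO-style finite-difference gradient estimate. The noise scale
# (sigma), the toy linear-softmax "model", and the loss are assumptions for
# illustration, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))          # toy linear classifier weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def model(x):
    """Clean model: class probabilities for a flattened input x."""
    return softmax(W @ x)

def randomized_model(x, sigma=0.05):
    """Defended model: add Gaussian noise to the output probabilities."""
    p = model(x) + rng.normal(scale=sigma, size=10)
    return np.clip(p, 1e-12, 1.0)

def zoo_coordinate_grad(query, x, target, i, h=1e-4):
    """Symmetric finite-difference estimate of d(-log p_target)/dx_i,
    the kind of per-coordinate estimate a ZOO-style attacker relies on."""
    e = np.zeros_like(x)
    e[i] = h
    f_plus = -np.log(query(x + e)[target])
    f_minus = -np.log(query(x - e)[target])
    return (f_plus - f_minus) / (2 * h)

x = rng.normal(size=784)
target = 3
# With the clean model the estimate tracks the true gradient; with output
# randomization the noise dominates the tiny finite difference, so the
# attacker's gradient estimate is mostly noise.
print("clean estimate:     ", zoo_coordinate_grad(model, x, target, i=0))
print("randomized estimate:", zoo_coordinate_grad(randomized_model, x, target, i=0))
```

Because the finite difference divides a change of order h in the queried outputs, even small output noise swamps the true signal, which is the intuition behind driving the success rate of a ZOO-style attacker toward 0%.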

Authors: Daniel Park, Haidar Khan, Azer Khan, Alex Gittens, Bülent Yener
