Gradient-based adversarial attacks on deep neural networks pose a serious
threat, since they can be deployed by adding imperceptible perturbations to the
test data of any network, and the risk they introduce cannot be assessed
through the network’s original training performance. Denoising and
dimensionality reduction are two distinct methods that have been independently
investigated to combat such attacks. While denoising offers the ability to
tailor the defense to the specific nature of the attack, dimensionality
reduction offers the advantage of potentially removing previously unseen
perturbations, along with reducing the training time of the network being
defended. We propose strategies to combine the advantages of these two defense
mechanisms. First, we propose the cascaded defense, which involves denoising
followed by dimensionality reduction. To reduce the training time of the
defense for a small trade-off in performance, we propose the hidden layer
defense, which involves feeding the output of the encoder of a denoising
autoencoder into the network. Further, we discuss how adaptive attacks against
these defenses can become significantly weaker when an alternative defense is
used, or when no defense is used at all. In this light, we propose a new metric
for evaluating a defense: the sensitivity of the adaptive attack to
modifications in the defense. Finally, we present a guideline for building an
ordered repertoire of defenses, a.k.a. a defense infrastructure, that adjusts
to limited computational resources in the presence of uncertainty about the attack
strategy.
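
To make the two proposed pipelines concrete, the sketch below shows one way they could be wired together in PyTorch. This is not the authors' implementation: the layer sizes, the reduced dimension, the use of a linear (PCA-style) projection for dimensionality reduction, and the helper names `cascaded_defense`, `hidden_layer_defense`, and `pca_components` are all illustrative assumptions.

```python
# Illustrative sketch of the cascaded and hidden layer defenses (assumptions noted above).
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    """Autoencoder trained to map adversarially perturbed inputs back to clean inputs."""

    def __init__(self, in_dim=784, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def cascaded_defense(x, dae, pca_components, classifier):
    """Cascaded defense: denoise the input, then project it onto a reduced
    subspace (here a fixed linear projection, e.g. principal components)
    before feeding it to the classifier."""
    denoised = dae(x)                      # denoising stage
    reduced = denoised @ pca_components.T  # dimensionality-reduction stage
    return classifier(reduced)


def hidden_layer_defense(x, dae, classifier):
    """Hidden layer defense: skip the decoder and feed the encoder's
    low-dimensional code directly into the classifier, trading a small
    amount of performance for reduced defense training time."""
    code = dae.encoder(x)
    return classifier(code)
```

In this reading, the cascaded defense composes the two mechanisms sequentially, while the hidden layer defense reuses the denoising autoencoder's encoder as the dimensionality-reduction step itself, so no separate projection needs to be trained.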

Authors: Rehana Mahfuz, Rajeev Sahay, Aly El Gamal
