The paper develops a new adversarial attack against deep neural networks
(DNNs), based on applying bio-inspired design to moving physical objects. To the
best of our knowledge, this is the first work to introduce physical attacks
with a moving object. Instead of following the dominant attack strategy in
the existing literature, i.e., introducing minor perturbations to a digital
input or a stationary physical object, we present two new successful attack
strategies in this paper. We show that, by superimposing several patterns onto one
physical object, a DNN becomes confused and picks one of the patterns to assign
a class label. Our experiment with three flapping-wing robots demonstrates the
possibility of developing an adversarial camouflage that causes a targeted mistake
by the DNN. We also show that certain motions can reduce the dependency among consecutive
frames in a video and make an object detector “blind”, i.e., unable to detect
that an object exists in the video. Hence, in a successful physical attack against a
DNN, targeted motion against the system should also be considered.
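As a rough illustration of the first strategy, the sketch below alpha-blends several candidate camouflage patterns onto an object image before it would be shown to a classifier. The function names, the simple blending scheme, and the placeholder `classify` stub are illustrative assumptions, not the authors' actual implementation or the patterns used in the paper.

```python
import numpy as np

def superimpose_patterns(base: np.ndarray, patterns: list, alpha: float = 0.5) -> np.ndarray:
    """Alpha-blend several camouflage patterns onto an object image.

    base: H x W x 3 float array in [0, 1] (the object's surface as seen by the camera).
    patterns: list of H x W x 3 float arrays in [0, 1] (candidate camouflage patterns).
    """
    out = base.copy()
    for p in patterns:
        # Each new pattern is partially blended over the previous result.
        out = (1.0 - alpha) * out + alpha * p
    return np.clip(out, 0.0, 1.0)

def classify(image: np.ndarray) -> str:
    """Placeholder for the victim DNN classifier (e.g., a pretrained ImageNet model)."""
    raise NotImplementedError("plug the victim model in here")

if __name__ == "__main__":
    h, w = 224, 224
    object_texture = np.random.rand(h, w, 3)                   # stand-in for a photo of the robot
    camouflage = [np.random.rand(h, w, 3) for _ in range(3)]   # stand-ins for candidate patterns
    attacked = superimpose_patterns(object_texture, camouflage)
    # In the paper's setting, the prediction for `attacked` would be compared with the
    # prediction for `object_texture` to see which pattern the DNN "picks" as the label.
```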

Authors: Bowei Xi, Yujie Chen, Fan Fei, Zhan Tu, Xinyan Deng
