Deep Neural Networks (DNNs) can be easily fooled by Adversarial Examples (AEs)
whose difference from the original samples is imperceptible to human eyes.
Moreover, AEs crafted by attacking one surrogate DNN tend to deceive other
black-box DNNs as well, a property known as attack transferability. Existing
works reveal that adopting certain optimization algorithms in the attack
improves transferability, but the underlying reasons have not been thoroughly
studied. In this paper, we investigate the impact of optimization on attack
transferability through comprehensive experiments covering 7 optimization
algorithms, 4 surrogates, and 9 black-box models. Through thorough empirical
analysis from three perspectives, we surprisingly find that the varied
transferability of AEs produced by different optimization algorithms is
strongly related to the corresponding Root Mean Square Error (RMSE) from their
original samples. On such a basis, one could simply approach high
transferability by attacking until the RMSE decreases, which motivates us to
propose a LArge RMSE Attack (LARA). Although LARA significantly improves
transferability by 20%, it is insufficient to exploit the vulnerability of
DNNs, leading to a natural urge that the strength of all attacks should be
measured by both the widely used $\ell_\infty$ bound and the RMSE addressed in
this paper, so that tricky enhancement of transferability would be avoided.
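
Since the abstract argues that attack strength should be reported under both the widely used $\ell_\infty$ bound and the RMSE, here is a minimal sketch of how the two metrics can be computed for an image pair. This is illustrative NumPy code, not the authors' implementation; the image shape and the $8/255$ perturbation budget are arbitrary assumptions for the example.

```python
import numpy as np

def rmse(x_adv, x_orig):
    """Root Mean Square Error between an adversarial example and its original."""
    diff = x_adv.astype(np.float64) - x_orig.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def linf(x_adv, x_orig):
    """L-infinity distance, the bound most attacks are constrained by."""
    return np.max(np.abs(x_adv.astype(np.float64) - x_orig.astype(np.float64)))

# Example: report both metrics for a hypothetical image pair in [0, 1],
# perturbed within an assumed 8/255 L-infinity budget.
x_orig = np.random.rand(3, 224, 224)
x_adv = np.clip(x_orig + np.random.uniform(-8/255, 8/255, x_orig.shape), 0.0, 1.0)
print(f"RMSE = {rmse(x_adv, x_orig):.4f}, L_inf = {linf(x_adv, x_orig):.4f}")
```

For an input with $N$ entries, the RMSE is simply the $\ell_2$ distance divided by $\sqrt{N}$, so reporting RMSE alongside the $\ell_\infty$ bound amounts to constraining the attack in both norms.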

Authors of this post: Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang
