Adversarial training is one of the most effective approaches for defending
against adversarial examples in deep learning models. Unlike other defense
strategies, adversarial training aims to improve the intrinsic robustness of
models. Over the past few years, adversarial training has been studied and
discussed from various perspectives, and a variety of improvements and
extensions have been proposed but are neglected in existing surveys. In this
survey, we systematically review, for the first time, the recent progress on
adversarial training under a novel taxonomy. We then discuss generalization
problems in adversarial training from three perspectives. Finally, we
highlight the challenges that are not yet fully solved and outline potential
future directions.
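For context, adversarial training is commonly formalized as a min-max optimization problem (this formulation is not part of the abstract and is included here only as a sketch): the inner maximization searches for a worst-case perturbation within a norm ball, and the outer minimization trains the model against it.

```latex
% Standard min-max formulation of adversarial training:
% \theta  - model parameters
% \delta  - adversarial perturbation, bounded by \epsilon in an \ell_p norm
% \mathcal{L} - training loss on the perturbed input
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \left[ \max_{\|\delta\|_p \le \epsilon}
         \mathcal{L}\bigl(f_{\theta}(x + \delta),\, y\bigr) \right]
```

In practice, the inner maximization is approximated with gradient-based attacks (e.g., a few steps of projected gradient descent), and the outer step is ordinary stochastic gradient descent on the resulting adversarial examples.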
Authors: <a href="http://arxiv.org/find/cs/1/au:+Bai_T/0/1/0/all/0/1">Tao Bai</a>, <a href="http://arxiv.org/find/cs/1/au:+Luo_J/0/1/0/all/0/1">Jinqi Luo</a>, <a href="http://arxiv.org/find/cs/1/au:+Zhao_J/0/1/0/all/0/1">Jun Zhao</a>, <a href="http://arxiv.org/find/cs/1/au:+Wen_B/0/1/0/all/0/1">Bihan Wen</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_Q/0/1/0/all/0/1">Qian Wang</a>