Recent work has demonstrated the vulnerability of modern text classifiers to
universal adversarial attacks, which are input-agnostic sequences of words
added to text processed by classifiers. Although such attacks are successful, the
word sequences they produce are often ungrammatical and easily distinguished
from natural text. We develop adversarial attacks that appear
closer to natural English phrases and yet confuse classification systems when
added to benign inputs. We leverage an adversarially regularized autoencoder
(ARAE) to generate triggers and propose a gradient-based search that aims to
maximize the downstream classifier’s prediction loss. Our attacks effectively
reduce model accuracy on classification tasks while being less identifiable than
prior attacks, according to automatic detection metrics and human-subject studies.
Our aim is to demonstrate that adversarial attacks can be made harder to detect
than previously thought and to enable the development of appropriate defenses.
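The abstract only outlines the method, so the following minimal PyTorch sketch illustrates one way a gradient-based search over an ARAE-style latent code could be set up to maximize a classifier's loss when the generated trigger is prepended to benign inputs. The module names, dimensions, toy generator/classifier, and the soft-embedding relaxation are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: gradient-based search in a latent space for a universal trigger
# that maximizes a downstream classifier's loss. All components are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, LATENT, TRIG_LEN, CLASSES = 1000, 64, 32, 3, 2

class ToyGenerator(nn.Module):
    """Stand-in for an ARAE decoder: latent code -> soft token distributions."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(LATENT, TRIG_LEN * VOCAB)
    def forward(self, z):
        logits = self.proj(z).view(-1, TRIG_LEN, VOCAB)
        return F.softmax(logits, dim=-1)          # (1, TRIG_LEN, VOCAB)

class ToyClassifier(nn.Module):
    """Stand-in for the target text classifier, operating on embeddings."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.out = nn.Linear(EMB, CLASSES)
    def forward_embedded(self, embedded):         # (B, T, EMB)
        return self.out(embedded.mean(dim=1))

gen, clf = ToyGenerator(), ToyClassifier()
for p in list(gen.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)                       # only the latent code is updated

benign_ids = torch.randint(0, VOCAB, (8, 20))     # placeholder benign inputs
labels = torch.randint(0, CLASSES, (8,))          # their true labels

z = torch.randn(1, LATENT, requires_grad=True)    # latent code of the trigger
opt = torch.optim.Adam([z], lr=0.1)

for step in range(100):
    soft_tokens = gen(z)                          # soft (relaxed) trigger tokens
    trig_emb = soft_tokens @ clf.emb.weight       # (1, TRIG_LEN, EMB)
    input_emb = clf.emb(benign_ids)               # (B, 20, EMB)
    # Prepend the trigger to every benign input and do gradient ascent on the loss.
    full = torch.cat([trig_emb.expand(len(benign_ids), -1, -1), input_emb], dim=1)
    loss = F.cross_entropy(clf.forward_embedded(full), labels)
    opt.zero_grad()
    (-loss).backward()                            # maximize the prediction loss
    opt.step()

trigger_ids = gen(z).argmax(dim=-1)               # discretize the final trigger
print(trigger_ids)
```

Searching in a continuous latent space, rather than directly over discrete tokens, is what lets the trigger stay close to the generator's (more natural-sounding) output distribution; the paper's actual search procedure and trigger decoding may differ from this toy setup.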

Authors: Liwei Song, Xinwei Yu, Hsuan-Tung Peng, Karthik Narasimhan
