Recent work has demonstrated the vulnerability of modern text classifiers to
universal adversarial attacks, which are input-agnostic sequences of words
added to text processed by classifiers. Despite being successful, the word
sequences produced in such attacks are often ungrammatical and can be easily
distinguished from natural text. We develop adversarial attacks that appear
closer to natural English phrases and yet confuse classification systems when
added to benign inputs. We leverage an adversarially regularized autoencoder
(ARAE) to generate triggers and propose a gradient-based search that aims to
maximize the downstream classifier’s prediction loss. Our attacks effectively
reduce model accuracy on classification tasks while being less identifiable
than prior attacks as per automatic detection metrics and human-subject studies.
Our aim is to demonstrate that adversarial attacks can be made harder to detect
than previously thought and to enable the development of appropriate defenses.
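The core mechanism described above — searching the latent space of a generative model by gradient ascent so that the decoded trigger maximizes a downstream classifier's loss — can be sketched in miniature. The snippet below is an illustrative toy, not the authors' implementation: it stands in a small analytic "decoder" for the ARAE and a linear model for the classifier (all names and dimensions are our own assumptions) to show the ascend-the-loss loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (our assumptions, not the paper's code):
# - a frozen tanh "decoder" mapping latent code z to a trigger embedding
#   (playing the ARAE generator's role)
# - a frozen linear classifier over the sum of input and trigger embeddings
d_lat, d_emb = 8, 16
W_dec = rng.normal(size=(d_emb, d_lat)) / np.sqrt(d_lat)  # decoder weights
w_clf = rng.normal(size=d_emb)                            # classifier weights

def loss_and_grad(z, x, y):
    """Classifier cross-entropy on (input + trigger), plus dLoss/dz."""
    t = np.tanh(W_dec @ z)                       # decoded trigger embedding
    s = 1.0 / (1.0 + np.exp(-w_clf @ (x + t)))   # P(label = 1)
    loss = -(y * np.log(s) + (1 - y) * np.log(1 - s))
    # Backprop by hand: dLoss/dlogit = s - y, then through tanh (1 - t^2)
    dz = W_dec.T @ ((s - y) * w_clf * (1.0 - t ** 2))
    return loss, dz

# Gradient *ascent* on the latent code: push the classifier's loss up
# on a benign input while the trigger stays on the decoder's manifold.
x_benign, y_true = rng.normal(size=d_emb), 0
z = rng.normal(size=d_lat)
losses = []
for _ in range(100):
    loss, dz = loss_and_grad(z, x_benign, y_true)
    losses.append(loss)
    z = z + 0.1 * dz                             # ascend the loss surface

print(f"loss before: {losses[0]:.3f}, loss after: {losses[-1]:.3f}")
```

Because the search moves through the generator's latent space rather than directly over discrete tokens, the decoded trigger stays on the generator's output manifold — the intuition for why such triggers read as more natural than word-level search.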

Authors: Liwei Song, Xinwei Yu, Hsuan-Tung Peng, Karthik Narasimhan
