As modern neural machine translation (NMT) systems have been widely deployed,
their security vulnerabilities require close scrutiny. Most recently, NMT
systems have been found vulnerable to targeted attacks which cause them to
produce specific, unsolicited, and even harmful translations. These attacks are
usually exploited in a white-box setting, where adversarial inputs causing
targeted translations are discovered for a known target system. However, this
approach is less viable when the target system is black-box and unknown to the
adversary (e.g., secured commercial systems). In this paper, we show that
targeted attacks on black-box NMT systems are feasible, based on poisoning a
small fraction of their parallel training data. We show that this attack can be
realised practically via targeted corruption of web documents crawled to form
the system’s training data. We then analyse the effectiveness of the targeted
poisoning in two common NMT training scenarios: the from-scratch training and
the pre-train & fine-tune paradigm. Our results are alarming: even on the
state-of-the-art systems trained with massive parallel data (tens of millions),
the attacks are still successful (over 50% success rate) under surprisingly low
poisoning budgets (e.g., 0.006%). Lastly, we discuss potential defences to
counter such attacks.

Go to Source of this post
Author Of this post: <a href="http://arxiv.org/find/cs/1/au:+Xu_C/0/1/0/all/0/1">Chang Xu</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_J/0/1/0/all/0/1">Jun Wang</a>, <a href="http://arxiv.org/find/cs/1/au:+Tang_Y/0/1/0/all/0/1">Yuqing Tang</a>, <a href="http://arxiv.org/find/cs/1/au:+Guzman_F/0/1/0/all/0/1">Francisco Guzman</a>, <a href="http://arxiv.org/find/cs/1/au:+Rubinstein_B/0/1/0/all/0/1">Benjamin I. P. Rubinstein</a>, <a href="http://arxiv.org/find/cs/1/au:+Cohn_T/0/1/0/all/0/1">Trevor Cohn</a>

By admin