We provide a comprehensive overview of adversarial machine learning, focusing
on two application domains: cybersecurity and computer vision. Research
in adversarial machine learning addresses a significant threat to the wide
application of machine learning techniques: they are vulnerable to carefully
crafted attacks from malicious adversaries. For example, deep neural networks
fail to correctly classify adversarial images, which are generated by adding
imperceptible perturbations to clean images (a minimal example is sketched
below). We first discuss three main
categories of attacks against machine learning techniques: poisoning attacks,
evasion attacks, and privacy attacks. Then the corresponding defense approaches
are introduced, along with the weaknesses and limitations of existing defense
approaches. We observe that adversarial samples in cybersecurity and computer vision
are fundamentally different. While adversarial samples in cybersecurity often
have properties and distributions that differ from those of the training data,
adversarial images in computer vision are created with minor input
perturbations. This further complicates the development of robust learning
techniques, because a robust learning technique must withstand different types
of attacks.
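
To illustrate the kind of evasion attack described above, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way of crafting imperceptibly perturbed images. The framework (PyTorch), epsilon value, and function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal FGSM sketch (assumed PyTorch; model, epsilon, and data are illustrative).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial images with one signed-gradient step that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss,
    # then clamp back to the valid image range so the change stays small.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a small epsilon, perturbations of this kind are typically enough to flip the prediction of an undefended classifier while remaining visually indistinguishable from the clean image.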

Author: Bowei Xi (http://arxiv.org/find/cs/1/au:+Xi_B/0/1/0/all/0/1)
