Adversarial Machine Learning has emerged as a substantial subfield of
Computer Science due to a lack of robustness in the models we train, along with
crowdsourcing practices that enable attackers to tamper with data. In the last
two years, interest has surged in adversarial attacks on graphs, yet the Graph
Classification setting remains nearly untouched. Since a Graph Classification
dataset consists of discrete graphs with class labels, related work has forgone
direct gradient optimization in favor of an indirect Reinforcement Learning
approach. We will study the novel problem of a Data Poisoning (training-time)
attack on Neural Networks for Graph Classification using Reinforcement Learning
Agents.
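
To make the problem framing in that last sentence concrete, here is a rough, purely illustrative sketch of how a training-time poisoning attack on a graph classification dataset might be cast as a reinforcement learning environment: actions flip edges in training graphs under a fixed budget, and the reward is the drop in the victim classifier's clean validation accuracy after retraining. Every name below (Graph, VictimClassifier, PoisoningEnv, random_rollout) is a hypothetical placeholder, the victim is a trivial edge-count rule rather than an actual Graph Neural Network, and the random rollout merely stands in for a learned agent; none of this is the authors' implementation.

```python
# Hypothetical sketch: training-time data poisoning of a graph classification
# dataset framed as an RL environment. All names are illustrative assumptions,
# not the authors' method.
import random
from typing import List, Set, Tuple

Edge = Tuple[int, int]


class Graph:
    """A discrete, labeled graph from the training set."""
    def __init__(self, n_nodes: int, edges: Set[Edge], label: int):
        self.n_nodes = n_nodes
        # Store edges in canonical (min, max) order so flips are well-defined.
        self.edges = {(min(u, v), max(u, v)) for u, v in edges}
        self.label = label


class VictimClassifier:
    """Stand-in for the victim model: a trivial edge-count threshold rule,
    used only so the sketch runs end to end without a GNN library."""
    def fit(self, graphs: List[Graph]) -> None:
        pos = [len(g.edges) for g in graphs if g.label == 1]
        neg = [len(g.edges) for g in graphs if g.label == 0]
        self.threshold = (sum(pos) / max(len(pos), 1)
                          + sum(neg) / max(len(neg), 1)) / 2

    def accuracy(self, graphs: List[Graph]) -> float:
        preds = [1 if len(g.edges) > self.threshold else 0 for g in graphs]
        return sum(p == g.label for p, g in zip(preds, graphs)) / len(graphs)


class PoisoningEnv:
    """RL environment: each action flips one edge in one training graph;
    the reward is the drop in clean validation accuracy after retraining."""
    def __init__(self, train: List[Graph], val: List[Graph], budget: int):
        self.train, self.val, self.budget = train, val, budget
        self.clean_acc = self._retrain_and_eval()

    def _retrain_and_eval(self) -> float:
        clf = VictimClassifier()
        clf.fit(self.train)
        return clf.accuracy(self.val)

    def step(self, graph_idx: int, u: int, v: int) -> float:
        # Flip (add or remove) the chosen edge in the chosen training graph.
        edge = (min(u, v), max(u, v))
        self.train[graph_idx].edges.symmetric_difference_update({edge})
        self.budget -= 1
        # Reward: how much the poisoned training set degrades clean accuracy.
        return self.clean_acc - self._retrain_and_eval()


def random_rollout(env: PoisoningEnv) -> float:
    """Random policy as a placeholder for a learned RL agent."""
    total_reward = 0.0
    while env.budget > 0:
        i = random.randrange(len(env.train))
        u, v = random.sample(range(env.train[i].n_nodes), 2)
        total_reward += env.step(i, u, v)
    return total_reward
```

In practice the threshold rule would be replaced by an actual Graph Neural Network and the random policy by a trained agent (for example a policy-gradient or Q-learning method), but the environment interface of budgeted edge flips and accuracy-degradation rewards would look much the same.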

Authors of this post: Jacob Dineen, A S M Ahsan-Ul Haque, Matthew Bielskas
