Backdoor attacks represent a serious threat to neural network models. A
backdoored model misclassifies trigger-embedded inputs into an
attacker-chosen target label while performing normally on benign inputs.
Numerous works have studied backdoor attacks on neural networks, but
only a few consider graph neural networks (GNNs). In particular, the impact of
the trigger injecting position on the performance of backdoor attacks on GNNs
has not been investigated in depth.

To bridge this gap, we conduct an experimental investigation on the
performance of backdoor attacks on GNNs. We apply two powerful GNN
explainability approaches to select the optimal trigger injecting position to
achieve two attacker objectives: high attack success rate and low clean
accuracy drop. Our empirical results on benchmark datasets and state-of-the-art
neural network models demonstrate the proposed method's effectiveness in
selecting the trigger injecting position for backdoor attacks on GNNs. For
instance, on the node classification task, the backdoor attack with the trigger
injecting position selected by GraphLIME reaches over 84% attack success
rate with less than 2.5% accuracy drop.
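
To illustrate the general idea of explainability-guided trigger placement, the sketch below shows one possible way to poison node features once per-node feature importance scores are available from an explainer such as GraphLIME. The function name, the choice of overwriting the least-important feature dimensions, and the fixed trigger value are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def poison_node_features(x, importance, target_nodes, trigger_value=1.0, k=3,
                         use_least_important=True):
    """Inject a fixed-value trigger into k feature dimensions of each target node.

    x:            (num_nodes, num_feats) node feature matrix
    importance:   (num_nodes, num_feats) per-node feature importance scores,
                  assumed to come from an explainer such as GraphLIME
    target_nodes: indices of the nodes to poison
    """
    x_poisoned = x.copy()
    for n in target_nodes:
        order = np.argsort(importance[n])              # feature dims, ascending importance
        dims = order[:k] if use_least_important else order[-k:]
        x_poisoned[n, dims] = trigger_value            # overwrite selected dims with the trigger
    return x_poisoned

# Toy usage: 5 nodes, 8 features; random scores stand in for real explainer output.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
importance = rng.random(size=(5, 8))
x_backdoored = poison_node_features(x, importance, target_nodes=[0, 2], k=2)
target_label = 1  # attacker-chosen label assigned to poisoned nodes during training
```

In an actual attack, the poisoned feature matrix and relabeled target nodes would be used to train (or fine-tune) the GNN, so that the trigger pattern activates the attacker-chosen label at inference time while clean inputs remain unaffected.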

Authors: Jing Xu, Minhui (Jason) Xue, Stjepan Picek
