One intriguing property of deep neural networks (DNNs) is their inherent
vulnerability to backdoor attacks — a trojan model responds to
trigger-embedded inputs in a highly predictable manner while functioning
normally otherwise. Despite the plethora of prior work on DNNs for continuous
data (e.g., images), the vulnerability of graph neural networks (GNNs) for
discrete-structured data (e.g., graphs) remains largely unexplored, which is highly
concerning given their increasing use in security-sensitive domains. To bridge
this gap, we present GTA, the first backdoor attack on GNNs. Compared with
prior work, GTA departs in significant ways: graph-oriented — it defines
triggers as specific subgraphs, including both topological structures and
descriptive features, entailing a large design spectrum for the adversary;
input-tailored — it dynamically adapts triggers to individual graphs, thereby
optimizing both attack effectiveness and evasiveness; downstream model-agnostic
— it can be readily launched without knowledge of downstream models or
fine-tuning strategies; and attack-extensible — it can be instantiated for
both transductive (e.g., node classification) and inductive (e.g., graph
classification) tasks, posing severe threats to a range of
security-critical applications. Through extensive evaluation using benchmark
datasets and state-of-the-art models, we demonstrate the effectiveness of GTA.
We further provide analytical justification for its effectiveness and discuss
potential countermeasures, pointing to several promising research directions.
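
For concreteness, here is a minimal sketch of what a subgraph trigger looks like mechanically: a small graph, defined by both its topology and its node features, grafted onto a set of anchor nodes in a benign input graph. The inject_trigger helper, the fixed triangle trigger, and the hand-picked anchors below are illustrative assumptions, not the paper's method; GTA itself learns a generator that tailors the trigger's topology and features to each individual input graph.

```python
import numpy as np

def inject_trigger(A, X, A_t, X_t, anchors):
    """Graft a trigger subgraph (A_t, X_t) onto a host graph (A, X).

    A:       (n, n) adjacency matrix of the host graph
    X:       (n, d) node-feature matrix of the host graph
    A_t:     (k, k) adjacency matrix of the trigger subgraph
    X_t:     (k, d) node features of the trigger subgraph
    anchors: k host-node indices whose induced subgraph is replaced
    """
    A_p, X_p = A.copy(), X.copy()
    X_p[anchors] = X_t                      # overwrite anchor-node features
    for i, u in enumerate(anchors):         # rewire the induced subgraph
        for j, v in enumerate(anchors):     # to match the trigger topology
            A_p[u, v] = A_t[i, j]
    return A_p, X_p

# Toy example: a 5-node host graph and a fully connected 3-node trigger.
rng = np.random.default_rng(0)
A = np.triu((rng.random((5, 5)) < 0.4).astype(float), 1)
A = A + A.T                                 # symmetric, no self-loops
X = rng.random((5, 4))
A_t = np.ones((3, 3)) - np.eye(3)           # fixed triangle (illustrative only)
X_t = rng.random((3, 4))
A_poisoned, X_poisoned = inject_trigger(A, X, A_t, X_t, anchors=[0, 2, 4])
```

The key departure of GTA from this static picture is the input-tailored property noted above: instead of a fixed (A_t, X_t) applied uniformly, a trained generator produces a different trigger for each input graph, which is what lets the attack optimize effectiveness and evasiveness simultaneously.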

Authors: Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang
