Long-distance Targeted Poisoning Attacks on Graph Neural Networks

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Graph Neural Network, Adversarial Attacks, Node Classification
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Graph neural networks (GNNs) are vulnerable to targeted poisoning, in which an attacker manipulates the graph to cause a target node to be misclassified as a label of the attacker's choosing. However, most existing targeted attacks inject or modify nodes within the target node's $k$-hop neighborhood to poison a $k$-layer GNN model. In this paper, we investigate the feasibility of {\em long-distance} attacks, i.e., attacks in which the injected nodes lie outside the target node's $k$-hop neighborhood. We show that such attacks are feasible by developing a bilevel optimization-based approach inspired by meta-learning. While this principled approach can successfully attack small graphs, scaling it to large graphs requires prohibitive memory and computation, making it impractical. We therefore develop a much cheaper, but approximate, heuristic-based approach that can attack much larger graphs, albeit with a lower attack success rate. Our evaluation shows that long-distance targeted poisoning is effective and difficult to detect with existing GNN defense mechanisms. To the best of our knowledge, our work is the first to study long-distance targeted poisoning attacks.
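The abstract's bilevel formulation can be illustrated with a minimal meta-gradient sketch in the style of Metattack: train a small GCN surrogate with differentiable inner SGD steps, then differentiate the attacker's loss on the target node through that training to score candidate edge perturbations, restricting the search to node pairs outside the target's $k$-hop neighborhood. All function names, the surrogate architecture, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a meta-gradient long-distance poisoning attack on a
# 2-layer GCN surrogate (assumed setup; not the paper's actual code).
import torch

def normalize_adj(A):
    # Symmetric GCN normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = torch.diag(A_hat.sum(1).pow(-0.5))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_forward(A_norm, X, W1, W2):
    # Two propagation layers => a k=2-hop receptive field per node.
    return A_norm @ torch.relu(A_norm @ X @ W1) @ W2

def meta_gradient(A, X, y, target, target_label, inner_steps=5, lr=0.1):
    """Score every potential edge flip by differentiating the attacker's
    loss on the target node through surrogate training (bilevel/meta)."""
    A = A.clone().requires_grad_(True)
    W1 = torch.empty(X.size(1), 8, requires_grad=True)
    W2 = torch.empty(8, int(y.max()) + 1, requires_grad=True)
    torch.nn.init.xavier_uniform_(W1)
    torch.nn.init.xavier_uniform_(W2)
    # Inner problem: train the surrogate with differentiable SGD steps.
    for _ in range(inner_steps):
        logits = gcn_forward(normalize_adj(A), X, W1, W2)
        loss = torch.nn.functional.cross_entropy(logits, y)
        g1, g2 = torch.autograd.grad(loss, (W1, W2), create_graph=True)
        W1, W2 = W1 - lr * g1, W2 - lr * g2
    # Outer objective: make the target node predict the attacker's label.
    logits = gcn_forward(normalize_adj(A), X, W1, W2)
    atk_loss = torch.nn.functional.cross_entropy(
        logits[target:target + 1], torch.tensor([target_label]))
    return torch.autograd.grad(atk_loss, A)[0]

def khop_mask(A, target, k):
    # Boolean mask of nodes within the target's k-hop neighborhood.
    reach = torch.zeros(A.size(0), dtype=torch.bool)
    reach[target] = True
    for _ in range(k):
        reach = reach | (((A > 0).float() @ reach.float()) > 0)
    return reach
```

A long-distance attacker would then pick the highest-scoring flip among node pairs where *both* endpoints fall outside `khop_mask(A, target, k)`, so the perturbation influences the target only indirectly, through the trained weights rather than through direct message passing.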
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3789