Large-Scale Adversarial Attacks on Graph Neural Networks via Graph Coarsening

29 Sept 2021 (modified: 13 Feb 2023)
ICLR 2022 Conference Withdrawn Submission
Readers: Everyone
Keywords: Graph Neural Network, Adversarial Attacks, Graph Coarsening
Abstract: Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. However, existing state-of-the-art adversarial attack methods against GNNs are typically constrained by the scale of the graph and fail to attack large graphs effectively. In this paper, we propose a novel attack method that tackles large-scale adversarial attacks on GNNs in a divide-and-conquer manner. Specifically, nodes are clustered based on their embeddings, coarsened graphs are constructed from the node clusters, and attacks are conducted on the coarsened graphs. Perturbations are selected starting from smaller coarsened graphs and progressing to larger, more detailed graphs, while most irrelevant nodes remain clustered, which significantly reduces the complexity of generating adversarial graphs. Extensive empirical results show that the proposed method substantially reduces the computational resources required to attack GNNs on large graphs while maintaining comparable attack performance on small graphs.
One-sentence Summary: Generating adversarial attacks on graphs in a divide-and-conquer manner.
Supplementary Material: zip
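
A minimal sketch of the coarsening step described in the abstract: cluster nodes by their embeddings, then merge each cluster into a super-node whose connections aggregate the original edges. The clustering method (KMeans), the number of clusters, and the function names below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans


def coarsen_graph(adj: np.ndarray, embeddings: np.ndarray, n_clusters: int):
    """Coarsen a graph by grouping nodes with similar embeddings.

    Returns the coarsened adjacency matrix and each original node's
    cluster label, so perturbations found on the coarse graph can later
    be mapped back to the detailed graph.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)

    # Assignment matrix P: P[i, c] = 1 if node i belongs to cluster c.
    n_nodes = adj.shape[0]
    assign = np.zeros((n_nodes, n_clusters))
    assign[np.arange(n_nodes), labels] = 1.0

    # Coarse adjacency: entry (c, d) counts edges between clusters c and d.
    coarse_adj = assign.T @ adj @ assign
    np.fill_diagonal(coarse_adj, 0)  # discard intra-cluster edges (self-loops)
    return coarse_adj, labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    adj = (rng.random((n, n)) < 0.05).astype(float)
    adj = np.triu(adj, 1)
    adj = adj + adj.T                   # symmetric, unweighted graph
    emb = rng.normal(size=(n, 16))      # stand-in for learned node embeddings
    coarse_adj, labels = coarsen_graph(adj, emb, n_clusters=20)
    print(coarse_adj.shape)             # (20, 20)
```

In this sketch, an attack would first select edge perturbations on `coarse_adj` and then refine them by un-clustering only the affected super-nodes, which is how the divide-and-conquer strategy keeps the search space small on large graphs.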