Rethinking Label Poisoning for GNNs: Pitfalls and Attacks

Published: 20 Jun 2023, Last Modified: 07 Aug 2023, AdvML-Frontiers 2023
Keywords: Label Poisoning Attacks, Graph Neural Networks, Robustness
Abstract: Node labels for graphs are usually generated by an automated process or crowd-sourced from human users. This opens up avenues for malicious users to compromise the training labels, making it unwise to rely on them blindly. While robustness against noisy labels is an active area of research, only a handful of papers address it for graph-based data, and the effects of adversarial label perturbations are studied even more sparsely. A recent work revealed that the entire literature on label poisoning for GNNs is plagued by serious evaluation pitfalls and showed that existing attacks become ineffective once these shortcomings are fixed. In this work, we introduce two new simple yet effective attacks that are significantly stronger (by up to $\sim8\%$) than the previous strongest attack. Our work demonstrates the need for more robust defense mechanisms, especially given the \emph{transferability} of our attacks, where a strategy devised for one model can effectively contaminate numerous other models.
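To make the threat model concrete, below is a minimal, self-contained sketch of label poisoning for a GNN: an adversary flips a budgeted fraction of the training labels before the model is trained, and the victim's clean-label test accuracy is measured afterwards. This is a generic illustration in plain PyTorch on synthetic data with hypothetical names (random flipping under a budget), not the attacks proposed in the paper.

```python
# Generic label-poisoning threat model for a GNN (synthetic data, plain PyTorch).
# This is NOT the paper's attack; it only illustrates the setting the paper studies.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- Synthetic graph: n nodes, random features, random edges, 2 classes ---
n, d, num_classes = 200, 16, 2
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
A = A + torch.eye(n)                                   # add self-loops
deg_inv_sqrt = A.sum(1).clamp(min=1).pow(-0.5)
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]  # symmetric normalization
y = torch.randint(num_classes, (n,))
train_mask = torch.zeros(n, dtype=torch.bool)
train_mask[:40] = True                                 # small labeled training set

# --- Label poisoning: flip the labels of a budgeted subset of training nodes ---
n_train = int(train_mask.sum())
budget = int(0.15 * n_train)                           # e.g. 15% of training labels
train_idx = torch.where(train_mask)[0]
poison_idx = train_idx[torch.randperm(n_train)[:budget]]
y_poisoned = y.clone()
y_poisoned[poison_idx] = (y_poisoned[poison_idx] + 1) % num_classes

# --- A tiny 2-layer GCN trained on the (possibly poisoned) labels ---
class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.W1 = torch.nn.Linear(d, 32)
        self.W2 = torch.nn.Linear(32, num_classes)

    def forward(self, X, A_hat):
        h = torch.relu(A_hat @ self.W1(X))
        return A_hat @ self.W2(h)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(X, A_hat)[train_mask], y_poisoned[train_mask])
    loss.backward()
    opt.step()

# The victim is evaluated against the clean labels of the unlabeled nodes.
acc = (model(X, A_hat).argmax(1)[~train_mask] == y[~train_mask]).float().mean()
print(f"test accuracy under poisoned training labels: {acc:.3f}")
```

The paper's attacks choose which labels to flip far more effectively than the random strategy above, and the same poisoned label set can transfer across different GNN architectures.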
Supplementary Material: zip
Submission Number: 90