Pitfalls in Evaluating GNNs under Label Poisoning Attacks

Published: 04 Mar 2023, Last Modified: 01 Apr 2023 · ICLR 2023 Workshop on Trustworthy ML Poster
Keywords: Graph Neural Networks, Adversarial Attacks, Pitfalls
Abstract: Graph Neural Networks (GNNs) have shown impressive performance on several graph-based tasks. However, recent research on adversarial attacks shows how sensitive GNNs are to node, edge, and label perturbations. Of particular interest is the label poisoning attack, where flipping an unnoticeable fraction of training labels can adversely affect a GNN's performance. While several such attacks have been proposed, latent flaws in the evaluation setup obscure their true effectiveness. In this work, we uncover five frequent pitfalls in the evaluation setup that plague all existing label-poisoning attacks for GNNs. We observe that, in some settings, state-of-the-art attacks are no better than a random label-flipping attack. We propose and advocate for a new evaluation setup that remedies these shortcomings and can help gauge the potency of label-poisoning attacks fairly. After remedying the pitfalls, we observe a difference in performance of up to 19.37% on the Cora-ML dataset.
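For illustration, here is a minimal sketch of the random label-flipping baseline the abstract compares against, assuming a NumPy-based node-classification setup; the function name, the `budget` parameter, and its default value are hypothetical and not taken from the paper.

```python
import numpy as np

def random_label_flip(labels, train_mask, num_classes, budget=0.05, seed=0):
    """Flip a `budget` fraction of training labels uniformly at random.

    labels:      (n,) integer array of node labels
    train_mask:  (n,) boolean array marking training nodes
    num_classes: number of distinct classes
    budget:      fraction of training labels to poison (hypothetical default)
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    train_idx = np.flatnonzero(train_mask)
    n_flip = int(budget * len(train_idx))
    # Choose which training nodes to poison, without replacement.
    victims = rng.choice(train_idx, size=n_flip, replace=False)
    for v in victims:
        # Replace the label with a uniformly random *different* class.
        choices = [c for c in range(num_classes) if c != poisoned[v]]
        poisoned[v] = rng.choice(choices)
    return poisoned
```

Training a GNN on `poisoned` versus the clean `labels` yields the random baseline against which, the paper argues, any more sophisticated label-poisoning attack should be measured.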