Revisiting Robustness in Graph Machine Learning

Published: 21 Nov 2022, Last Modified: 05 May 2023, TSRML 2022
Keywords: graph neural networks, adversarial robustness, label propagation, node-classification, stochastic block models, Bayes classifier, non-i.i.d. data, graph learning, graphs, robustness
TL;DR: GNNs suffer from over-robustness, that is, robustness beyond the point of semantic change: prevalent threat models in the graph domain include a large fraction of perturbed graphs that violate the unchanged-semantics assumption.
Abstract: Many works show that the node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear whether the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph, which is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results uncover: $i)$ for a majority of nodes, the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged-semantics assumption; $ii)$ surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change. We find this to be a phenomenon complementary to adversarial robustness, related to the small degree of nodes and the dependence of their class membership on the neighbourhood structure.
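
For readers unfamiliar with CSBMs, below is a minimal sketch of sampling a two-class Contextual Stochastic Block Model: edges appear with probability p within a class and q across classes, and node features are Gaussian with a class-dependent mean. All parameter names and defaults here are illustrative assumptions, not the paper's configuration.

    import numpy as np

    def sample_csbm(n=100, p=0.3, q=0.05, mu=1.0, sigma=1.0, d=16, rng=None):
        """Sample a two-class CSBM: labels, adjacency, and contextual features."""
        rng = np.random.default_rng(rng)
        y = rng.integers(0, 2, size=n)                    # node labels in {0, 1}
        same = (y[:, None] == y[None, :])                 # same-class indicator matrix
        probs = np.where(same, p, q)                      # per-pair edge probabilities
        upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample upper triangle only
        adj = (upper | upper.T).astype(int)               # symmetric adjacency, no self-loops
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                            # random unit direction for the means
        signs = np.where(y == 1, 1.0, -1.0)
        X = signs[:, None] * mu * u + sigma * rng.standard_normal((n, d))
        return adj, X, y

    adj, X, y = sample_csbm(rng=0)

In such a model, semantic content change under structure perturbations can be reasoned about analytically (e.g., via the Bayes classifier over the generative process), which is what makes CSBMs a natural testbed for the semantics-aware notion of adversarial graphs described in the abstract.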