Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: adversarial training, adversarial examples, robust graph learning, graph machine learning, graph neural networks, graphs
TL;DR: We highlight and overcome fundamental limitations and pitfalls in adversarial training against graph structure perturbations, and propose a new global attack that is aware of node-level perturbation constraints.
Abstract: Despite its success in the image domain, adversarial training has not (yet) proven to be an effective defense for Graph Neural Networks (GNNs) against graph structure perturbations. In pursuit of fixing adversarial training, (1) we show and overcome fundamental theoretical as well as practical limitations of the graph learning setting adopted in prior work; (2) we reveal that flexible GNNs based on learnable graph diffusion are able to adjust to adversarial perturbations, while the learned message passing scheme is naturally interpretable; (3) we introduce the first attack for structure perturbations that, while targeting multiple nodes at once, is capable of handling global (graph-level) as well as local (node-level) constraints. Combining these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.
Submission Number: 3543
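
For readers unfamiliar with the general setup, the sketch below illustrates a generic PGD-style adversarial training loop for a dense GCN against edge-flip structure perturbations, with a global (graph-level) edge budget and local (per-node) budgets. This is an illustrative assumption, not the paper's method: the model, the relaxed edge-flip parameterization, the crude rescaling-based projection, and all hyperparameter names are hypothetical.

```python
# Minimal sketch (assumed, not the paper's algorithm): PGD-style adversarial
# training of a dense GCN against edge-flip perturbations with a global and
# per-node (local) budget. Symmetrization of the perturbation is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCN(nn.Module):
    def __init__(self, d_in, d_hid, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(d_in, d_hid)
        self.lin2 = nn.Linear(d_hid, n_classes)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a.sum(1).clamp(min=1e-12).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)

def project_budgets(delta, global_budget, local_budget):
    # Clip relaxed edge-flip values to [0, 1], then approximately enforce
    # per-node (local) and graph-level (global) budgets by rescaling.
    delta = delta.clamp(0.0, 1.0)
    row_mass = delta.sum(1, keepdim=True)
    delta = delta * (local_budget / row_mass.clamp(min=local_budget))
    total = delta.sum()
    if total > global_budget:
        delta = delta * (global_budget / total)
    return delta

def adversarial_training_step(model, opt, x, adj, y, train_mask,
                              steps=10, lr_attack=0.1,
                              global_budget=50.0, local_budget=2.0):
    # Inner maximization: relax binary edge flips to a continuous delta.
    delta = torch.zeros_like(adj, requires_grad=True)
    for _ in range(steps):
        flipped = adj + (1 - 2 * adj) * delta   # 0 -> delta, 1 -> 1 - delta
        loss = F.cross_entropy(model(x, flipped)[train_mask], y[train_mask])
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr_attack * grad.sign()
            delta.copy_(project_budgets(delta, global_budget, local_budget))
    # Outer minimization: update model parameters on the perturbed graph.
    flipped = (adj + (1 - 2 * adj) * delta).detach()
    opt.zero_grad()
    loss = F.cross_entropy(model(x, flipped)[train_mask], y[train_mask])
    loss.backward()
    opt.step()
    return loss.item()
```

A typical use would loop this step over epochs on a (features, adjacency, labels) triple; note that the projection here is a simple rescaling heuristic rather than an exact projection onto the budget constraints.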