A Simple and Yet Fairly Effective Defense for Graph Neural Networks

Published: 20 Jun 2023, Last Modified: 07 Aug 2023 · AdvML-Frontiers 2023
Keywords: Graph Neural Networks, Graph Neural Network Robustness, Node classification
TL;DR: We introduce NoisyGCN, a defense method for Graph Convolutional Networks (GCNs) based on injecting noise into the model's architecture, effectively enhancing its robustness against adversarial attacks while adding minimal time complexity.
Abstract: Graph neural networks (GNNs) have become the standard approach for performing machine learning on graphs. However, concerns have been raised regarding their vulnerability to small adversarial perturbations. Existing defense methods suffer from high time complexity and can negatively impact the model's performance on clean graphs. In this paper, we propose NoisyGCN, a defense method that injects noise into the GCN architecture. We derive a mathematical upper bound linking GCN's robustness to noise injection, establishing our method's effectiveness. Through empirical evaluations on the node classification task, we demonstrate superior or comparable performance to existing methods while minimizing the added time complexity.
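As a rough illustration of the idea described in the abstract, the sketch below shows a single GCN layer with zero-mean Gaussian noise injected into the hidden representation. The injection point, noise distribution, and scale `sigma` are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in the standard GCN.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def noisy_gcn_layer(A_norm, H, W, sigma=0.1, rng=None):
    # One GCN layer with zero-mean Gaussian noise added to the pre-activation;
    # the injection point and distribution are assumptions for illustration.
    rng = np.random.default_rng(0) if rng is None else rng
    Z = A_norm @ H @ W
    Z = Z + sigma * rng.standard_normal(Z.shape)
    return np.maximum(Z, 0.0)  # ReLU activation

# Toy 4-node path graph, 3 input features, 2 hidden units.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(42)
H0 = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
H1 = noisy_gcn_layer(normalize_adj(A), H0, W, sigma=0.1, rng=rng)
```

At inference or training time, the noise term perturbs the hidden features so that small adversarial edits to the graph are less likely to flip predictions; the extra cost is a single random draw per layer, which is consistent with the minimal overhead claimed in the abstract.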
Supplementary Material: zip
Submission Number: 59