TL;DR: We establish an adversarial generalization bound for general GNNs via covering number analysis.
Abstract: While Graph Neural Networks (GNNs) have shown outstanding performance in node classification tasks, they are vulnerable to adversarial attacks, i.e., imperceptible changes to input samples. Adversarial training, a widely used tool for enhancing the adversarial robustness of GNNs, has demonstrated remarkable effectiveness in node classification tasks. However, the generalization properties that explain the behavior of adversarially trained GNNs remain poorly understood from a theoretical viewpoint. To fill this gap, we develop a high-probability generalization bound for general GNNs in adversarial learning through covering number analysis. We estimate the covering number of the GNN model class over the entire perturbed feature matrix by constructing a cover for the perturbation set. Our results apply broadly to a range of GNNs. We demonstrate their applicability by investigating the generalization performance of several popular GNN models under adversarial attacks, revealing the architecture-related factors that influence the generalization gap. Our experimental results on benchmark datasets support the established theoretical findings.
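For readers unfamiliar with the setup, the quantities involved can be sketched as follows. This is the standard covering-number route (adversarial risk, its empirical counterpart, and a Dudley-type entropy integral), not the paper's exact statement; the symbols $\mathcal{F}$, $\rho$, $\epsilon$, and $D$ are our own illustrative notation.

```latex
% Adversarial risk and empirical adversarial risk for hypothesis f,
% loss \ell, sample size n, and perturbation budget \epsilon:
R_{\mathrm{adv}}(f) = \mathbb{E}_{(x,y)}\Big[\max_{\|\delta\|\le\epsilon} \ell\big(f(x+\delta),y\big)\Big],
\qquad
\widehat{R}_{\mathrm{adv}}(f) = \frac{1}{n}\sum_{i=1}^{n}\max_{\|\delta_i\|\le\epsilon} \ell\big(f(x_i+\delta_i),y_i\big).

% A generic high-probability covering-number (Dudley-type) bound on the
% adversarial generalization gap, where \mathcal{N}(\mathcal{F},\rho,\tau)
% is the \tau-covering number of the model class \mathcal{F} under metric
% \rho, and D bounds the diameter of \mathcal{F}:
\sup_{f\in\mathcal{F}} \big( R_{\mathrm{adv}}(f) - \widehat{R}_{\mathrm{adv}}(f) \big)
\lesssim \inf_{\alpha>0}\Big( \alpha + \frac{1}{\sqrt{n}} \int_{\alpha}^{D}
\sqrt{\log \mathcal{N}(\mathcal{F},\rho,\tau)}\,d\tau \Big)
+ \sqrt{\frac{\log(1/\delta)}{n}}
\quad \text{with probability} \ge 1-\delta.
```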
Lay Summary: Graph Neural Networks (GNNs) are vulnerable to deliberately crafted perturbations. To preserve GNNs' strong ability to process graph-structured data, adversarial training is adopted to defend against these attacks. However, we find that GNNs exhibit performance degradation under adversarial training, manifesting as a larger generalization gap. To investigate this problem, we use covering numbers, a classical technique, to measure the generalization properties of GNNs in adversarial training.
Specifically, we derive upper bounds on the generalization gap for several GNN models to explain their generalization behavior in adversarial training. Our theoretical results provide helpful insights into model construction and algorithm design for improving the generalization ability of GNNs in adversarial training. We also validate our findings experimentally.
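As a concrete illustration of the adversarial training procedure discussed above, here is a minimal sketch in PyTorch: a one-layer GCN trained with PGD perturbations on the node feature matrix. All names (`GCN`, `pgd_perturb`, the synthetic graph) are ours for illustration; the paper's actual models, attack budgets, and datasets may differ.

```python
import torch
import torch.nn.functional as F

class GCN(torch.nn.Module):
    """Minimal one-layer GCN: logits = A_hat @ (X W). Illustrative only."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, num_classes, bias=False)

    def forward(self, A_hat, X):
        return A_hat @ self.W(X)

def pgd_perturb(model, A_hat, X, y, eps=0.1, alpha=0.02, steps=5):
    """PGD on the node feature matrix under an L_inf budget eps
    (the inner maximization of adversarial training)."""
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(A_hat, X + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

# Synthetic graph: 100 nodes, 16 features, 3 classes (placeholder data).
n, d, c = 100, 16, 3
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(1.0)                                # add self-loops
deg_inv_sqrt = A.sum(1).pow(-0.5)
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}

model = GCN(d, c)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(50):
    delta = pgd_perturb(model, A_hat, X, y)              # inner maximization
    loss = F.cross_entropy(model(A_hat, X + delta), y)   # outer minimization
    opt.zero_grad(); loss.backward(); opt.step()
```

The generalization gap studied in the paper is, in this setting, the difference between the adversarial loss on held-out nodes and the adversarial training loss minimized in the loop above.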
Primary Area: Theory->Learning Theory
Keywords: Adversarial Learning, Graph Neural Networks
Submission Number: 3516