A Unified Optimization-Based Framework for Certifiably Robust and Fair Graph Neural Networks

Published: 2025 | Last Modified: 05 Feb 2026 | IEEE Trans. Signal Process. 2025 | CC BY-SA 4.0
Abstract: Graph Neural Networks (GNNs) have exhibited exceptional performance across diverse application domains by harnessing the inherent interconnectedness of data. However, recent findings point towards the instability of GNNs under both feature and structure perturbations. The emergence of adversarial attacks targeting GNNs poses a substantial and pervasive threat, compromising their overall performance and learning capabilities. In this work, we first derive a theoretical bound on the global Lipschitz constant of GNNs under both feature and structure perturbations. Building on this bound, we propose a unifying approach, termed AdaLipGNN, for adversarial training of GNNs through an optimization framework that provides attack-agnostic robustness. By seamlessly integrating graph denoising and network regularization, AdaLipGNN offers a comprehensive and versatile solution, extending its applicability and enabling robust regularization for diverse network architectures. Further, we develop a provably convergent iterative algorithm, leveraging block successive upper-bound minimization, to learn robust and stable GNN hypotheses. Numerical results from extensive experiments on real-world datasets clearly illustrate that the proposed AdaLipGNN outperforms other defence methods.
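The paper's actual bound is not reproduced in the abstract; as a rough illustration of the kind of quantity involved, the sketch below computes a crude global Lipschitz upper bound for a linear multi-layer GNN by multiplying the spectral norms of the propagation matrix and the per-layer weights. All function names, and the layer-wise product form of the bound, are our own assumptions for illustration, not the derivation from the paper.

```python
import numpy as np

def spectral_norm(M):
    # Operator 2-norm of M: its largest singular value.
    return np.linalg.svd(M, compute_uv=False)[0]

def gnn_lipschitz_bound(adj_norm, weights):
    """Naive global Lipschitz upper bound for an L-layer GNN of the form
    x -> sigma(A_hat @ ... sigma(A_hat @ x @ W_1) ... @ W_L)
    with 1-Lipschitz activations sigma: prod_l ||A_hat||_2 * ||W_l||_2.
    (Illustrative only; the paper derives a tighter, perturbation-aware bound.)
    """
    a = spectral_norm(adj_norm)
    bound = 1.0
    for W in weights:
        bound *= a * spectral_norm(W)
    return bound

# Toy example: 2-node path graph with self-loops, symmetrically normalized.
A_hat = np.array([[0.5, 0.5],
                  [0.5, 0.5]])  # spectral norm = 1.0
W1 = 2.0 * np.eye(2)            # spectral norm = 2.0
print(gnn_lipschitz_bound(A_hat, [W1]))  # -> 2.0
```

A bound of this product form is what a Lipschitz-based regularizer would penalize during training to stabilize the learned hypothesis against input perturbations.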