Graph Convolutional Network Robustness Verification Algorithm Based on Dual Approximation

Published: 01 Jan 2024, Last Modified: 08 Mar 2025. ICFEM 2024. License: CC BY-SA 4.0.
Abstract: As Graph Neural Network (GNN) technologies continue to develop, securing their robustness is crucial for broad adoption in practical applications. Although various verification methods for trained GNNs have been proposed, studies indicate that Graph Convolutional Networks (GCNs) remain vulnerable to adversarial attacks on both graph structure and node attributes. We propose a novel approach to verifying the robustness of GCNs against perturbations of node attributes, employing a dual approximation technique to convexify nonlinear activation functions. This transformation turns the original non-convex problem into a more tractable convex form. We first apply linear relaxation to convert the fixed-value features in each GCN layer into variables suitable for optimization. Next, we reframe the task of identifying the worst-case margin for a graph as a linear problem, which we solve using linear programming. Given the discrete nature of graph data, we define a perturbation space that extends the data domain from discrete to continuous values. To tighten the convex relaxation, we use a dual approximation algorithm to bound the optimizable variables. Our method certifies the robustness of nodes against perturbations within a specified range, surpasses previous approaches in verification accuracy, and is distinctively tailored to handle the S-curve activation, an aspect less explored in prior research. Experimental results confirm that our method significantly refines the precision of robustness verification for GCNs.
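To illustrate the kind of convexification the abstract describes, the sketch below computes a sound pair of parallel linear bounds for the sigmoid (an S-curve) over an input interval, using the chord slope and a sampling-based intercept search. This is only an illustrative approximation of the general idea, not the paper's dual approximation algorithm; the function names and the grid-sampling strategy are assumptions.

```python
import math

def sigmoid(x):
    """The S-shaped activation to be relaxed."""
    return 1.0 / (1.0 + math.exp(-x))

def linear_relaxation(l, u, n=1000):
    """Bound sigmoid on [l, u] between two parallel lines y = k*x + b.

    k is the chord slope between the endpoints; the intercepts are
    shifted so the lines enclose the curve on a sampling grid
    (sound up to the grid resolution). Returns (k, b_low, b_high).
    """
    k = (sigmoid(u) - sigmoid(l)) / (u - l)
    xs = [l + (u - l) * i / n for i in range(n + 1)]
    resid = [sigmoid(x) - k * x for x in xs]
    return k, min(resid), max(resid)

# Example: relax sigmoid over a perturbation interval [-2, 3].
k, b_low, b_high = linear_relaxation(-2.0, 3.0)
for i in range(201):
    x = -2.0 + 5.0 * i / 200
    assert k * x + b_low <= sigmoid(x) + 1e-6
    assert sigmoid(x) <= k * x + b_high + 1e-6
```

Replacing the nonlinear activation with such linear envelopes is what makes the worst-case margin computable by linear programming, as the abstract outlines.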