Navigating Graph Robust Learning against All-Intensity Attacks

Published: 20 Jun 2023, Last Modified: 07 Aug 2023
Venue: AdvML-Frontiers 2023
Keywords: Graph neural network, Graph robust learning, Mixture-of-Experts
Abstract: Graph Neural Networks (GNNs) have demonstrated exceptional performance on a variety of graph learning tasks, but their vulnerability to adversarial attacks remains a major concern. Accordingly, many defense methods have been developed to learn robust graph representations and mitigate the impact of adversarial attacks. However, most existing methods suffer from two major drawbacks: (i) their robustness degrades under higher-intensity attacks, and (ii) they cannot scale to large graphs. In light of this, we develop a novel graph defense method to address these limitations. Our method first applies a denoising module that recovers a cleaner graph by removing edges associated with attacked nodes; it then uses a Mixture-of-Experts to select differentially private noise of different magnitudes to counteract node features attacked at different intensities. In addition, the overall design avoids heavy adjacency-matrix computations such as SVD, enabling the framework to scale to large graphs.
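To make the two-stage idea in the abstract concrete, below is a minimal sketch (not the authors' code) of how a denoising step that drops edges touching suspected attacked nodes could be combined with a Mixture-of-Experts gate over a fixed set of Gaussian noise magnitudes applied to node features. All function names, the suspicion-score input, the particular noise scales, and the gating architecture are assumptions for illustration only.

```python
# Hypothetical sketch of the defense pipeline described in the abstract.
# Assumed: per-node "suspicion" scores are available from some detector,
# and the expert set is a fixed list of Gaussian noise scales.
import torch
import torch.nn as nn
import torch.nn.functional as F


def drop_suspected_edges(edge_index: torch.Tensor,
                         suspicion: torch.Tensor,
                         threshold: float = 0.5) -> torch.Tensor:
    """Keep only edges whose endpoints both have suspicion below threshold.

    edge_index: (2, E) tensor of source/target node ids.
    suspicion:  (N,) per-node attack-suspicion scores in [0, 1].
    """
    src, dst = edge_index
    keep = (suspicion[src] < threshold) & (suspicion[dst] < threshold)
    return edge_index[:, keep]


class NoiseMoE(nn.Module):
    """Gate over a fixed set of noise scales; each 'expert' adds Gaussian
    noise of a different magnitude to the (possibly attacked) features."""

    def __init__(self, in_dim: int, noise_scales=(0.0, 0.1, 0.5, 1.0)):
        super().__init__()
        self.register_buffer("scales", torch.tensor(noise_scales))
        self.gate = nn.Linear(in_dim, len(noise_scales))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=-1)            # (N, K) per-node gate
        noise = torch.randn(x.shape[0], len(self.scales), x.shape[1],
                            device=x.device) * self.scales.view(1, -1, 1)
        noisy = x.unsqueeze(1) + noise                        # (N, K, D) noised views
        return (weights.unsqueeze(-1) * noisy).sum(dim=1)     # gated combination


# Toy usage: 5 nodes, 4 features, a small edge list, made-up suspicion scores.
x = torch.randn(5, 4)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
suspicion = torch.tensor([0.1, 0.9, 0.2, 0.3, 0.4])
clean_edges = drop_suspected_edges(edge_index, suspicion)
robust_x = NoiseMoE(in_dim=4)(x)
```

Note that both steps operate on edge lists and node features directly, without forming or factorizing the dense adjacency matrix, which is consistent with the scalability claim in the abstract; the actual paper's modules may differ substantially from this sketch.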
Submission Number: 49