PROSPECT: Learn MLPs Robust against Graph Adversarial Structure Attacks

16 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: graph neural networks, adversarial robustness, graph knowledge distillation, graph heterophily
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: This paper presents the first online (and mutual) GNN-to-MLP distillation framework, aiming at strong adversarial robustness, high clean accuracy, heterophily adaptability, and inference scalability.
Abstract: Current adversarial defense methods for GNNs exhibit critical limitations that obstruct real-world application: 1) inadequate adaptability to graph heterophily, 2) a lack of generalizability to early GNNs such as GraphSAGE, which are widely used downstream, and 3) inference scalability too low for resource-constrained scenarios. To address these challenges simultaneously, we propose PROSPECT, the first online graph distillation multi-layer perceptron (GD-MLP) framework for learning GNNs and MLPs that are robust against adversarial structure attacks on both homophilous and heterophilous graphs. PROSPECT integrates seamlessly with GraphSAGE and achieves inference scalability exponentially higher than that of conventional GNNs. Through decision-boundary analysis, we formally prove the robustness of PROSPECT against successful adversarial attacks. Furthermore, leveraging the Banach fixed-point theorem, we analyze the convergence condition of the MLP in PROSPECT and, inspired by this analysis and the alternating iterative turbo decoding from information theory, propose a quasi-alternating cosine annealing (QACA) learning rate scheduler. Experiments on five homophilous and three heterophilous graphs demonstrate that PROSPECT surpasses current defense methods and offline GD-MLPs in adversarial robustness and clean accuracy, that its inference scales orders of magnitude better than existing defenders, and that QACA is effective.
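The abstract characterizes PROSPECT as an online, mutual GNN-to-MLP distillation framework but does not spell out its training objective. The following is a minimal sketch of what such a mutual objective could look like in PyTorch, assuming a standard formulation: each model minimizes cross-entropy on the labels plus a temperature-softened KL term toward the other model's detached predictions, with both models updated simultaneously (online) rather than from a frozen teacher (offline). The function name and the knobs `tau` and `alpha` are illustrative assumptions, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_losses(gnn_logits, mlp_logits, labels, tau=2.0, alpha=0.5):
    """One step of a generic online mutual GNN<->MLP distillation objective.

    Each model sees a cross-entropy term on the ground-truth labels plus a
    KL term pulling it toward the other's softened prediction. The partner's
    logits are detached so each KL term only trains one model at a time.
    tau (temperature) and alpha (mixing weight) are illustrative, not the
    paper's values.
    """
    ce_gnn = F.cross_entropy(gnn_logits, labels)
    ce_mlp = F.cross_entropy(mlp_logits, labels)
    # Soften both distributions; tau**2 rescaling follows standard practice
    # for temperature-scaled distillation.
    log_p_gnn = F.log_softmax(gnn_logits / tau, dim=-1)
    log_p_mlp = F.log_softmax(mlp_logits / tau, dim=-1)
    kl_mlp_from_gnn = F.kl_div(log_p_mlp, log_p_gnn.detach().exp(),
                               reduction="batchmean") * tau ** 2
    kl_gnn_from_mlp = F.kl_div(log_p_gnn, log_p_mlp.detach().exp(),
                               reduction="batchmean") * tau ** 2
    loss_gnn = (1 - alpha) * ce_gnn + alpha * kl_gnn_from_mlp
    loss_mlp = (1 - alpha) * ce_mlp + alpha * kl_mlp_from_gnn
    return loss_gnn, loss_mlp
```

At inference time, only the MLP would be deployed, which is what yields the claimed scalability: prediction requires no neighborhood aggregation, so its cost is independent of graph topology.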
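Similarly, the quasi-alternating cosine annealing (QACA) scheduler is only named in the abstract. The sketch below shows one plausible reading, assuming the two models' learning rates follow cosine annealing in alternating phases (echoing alternating turbo-decoding iterations), with a small residual rate during each model's inactive phase making the alternation "quasi" rather than strict. The phase length `period` and the residual floor are invented for illustration and are not the paper's specification.

```python
import math

def qaca_lr(epoch: int, base_lr: float, period: int, active_first: bool = True) -> float:
    """Hypothetical quasi-alternating cosine annealing schedule.

    In 'active' phases the learning rate follows a cosine decay from
    base_lr toward zero; in 'inactive' phases it is held at a small
    residual value so the partner model dominates training, loosely
    mirroring alternating turbo-decoding iterations.
    """
    phase, t = divmod(epoch, period)
    active = (phase % 2 == 0) == active_first
    if not active:
        return base_lr * 1e-3  # small residual rate -> "quasi"-alternating
    # Standard cosine annealing within the active phase.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t / period))

# Example: the MLP anneals in even phases, the GNN in odd phases.
mlp_lrs = [qaca_lr(e, 1e-2, period=50, active_first=True) for e in range(200)]
gnn_lrs = [qaca_lr(e, 1e-2, period=50, active_first=False) for e in range(200)]
```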
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 634