Exploring Adversarial Robustness of Graph Neural Networks in Directed Graphs

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Adversarial Robustness, Graph Neural Networks, Directed Graphs
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Existing research on robust Graph Neural Networks (GNNs) focuses predominantly on undirected graphs, neglecting the trust implications inherent in directed graphs. This work analyzes the limitations of existing approaches from both attack and defense perspectives and presents an exploration of the adversarial robustness of GNNs in directed graphs. Specifically, we first introduce a new and more realistic directed graph attack setting to overcome the limitations of existing attacks. We then propose a simple and effective message-passing framework as a plug-in layer that enhances the robustness of GNNs while avoiding a false sense of security. Our findings demonstrate that the profound trust implications offered by directed graphs can be harnessed to significantly bolster the robustness and resilience of GNNs. When coupled with existing defense strategies, this framework achieves outstanding clean accuracy and state-of-the-art robust performance against both transfer and adaptive attacks.
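The abstract describes the proposed defense only at a high level, as a plug-in message-passing layer for directed graphs. As a purely illustrative sketch (not the authors' method, whose details are in the paper itself), one plausible form of such a layer aggregates in-neighbors and out-neighbors separately, so the direction of each edge carries distinct trust information. The class name and aggregation scheme below are hypothetical.

```python
# Illustrative only: the abstract does not specify the authors' framework.
# This sketch assumes a direction-aware aggregation that treats in- and
# out-edges separately, one plausible shape for a "plug-in" layer in
# directed-graph GNNs. All names here are hypothetical.
import torch
import torch.nn as nn


class DirectedMessagePassing(nn.Module):
    """Hypothetical plug-in layer: separate aggregation over in/out edges."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin_in = nn.Linear(dim, dim)    # transform for in-neighbor messages
        self.lin_out = nn.Linear(dim, dim)   # transform for out-neighbor messages
        self.lin_self = nn.Linear(dim, dim)  # transform for the node itself

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) dense directed adjacency,
        # with adj[i, j] = 1 iff there is an edge i -> j.
        deg_in = adj.sum(dim=0).clamp(min=1).unsqueeze(-1)   # in-degree per node
        deg_out = adj.sum(dim=1).clamp(min=1).unsqueeze(-1)  # out-degree per node
        msg_in = adj.t() @ x / deg_in    # mean over in-neighbors (predecessors)
        msg_out = adj @ x / deg_out      # mean over out-neighbors (successors)
        return torch.relu(
            self.lin_self(x) + self.lin_in(msg_in) + self.lin_out(msg_out)
        )


# Usage sketch: 5 nodes, 8-dim features, random directed adjacency.
if __name__ == "__main__":
    x = torch.randn(5, 8)
    adj = (torch.rand(5, 5) > 0.7).float()
    layer = DirectedMessagePassing(8)
    print(layer(x, adj).shape)  # torch.Size([5, 8])
```

Keeping the in- and out-edge transforms separate is what makes such a layer sensitive to edge direction; a symmetric (undirected) aggregation would discard exactly the trust signal the abstract argues is valuable.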
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6767