GNN Predictions on k-hop Egonets Boost Adversarial Robustness

Published: 28 Oct 2023, Last Modified: 21 Dec 2023
NeurIPS 2023 GLFrontiers Workshop Poster
Keywords: adversarial robustness, k-hop subgraphs
TL;DR: GNN predictions on k-hop egonets boost adversarial robustness.
Abstract: Like many other deep learning models, Graph Neural Networks (GNNs) have been shown to be susceptible to adversarial attacks, i.e., the addition of crafted, imperceptible noise to the input data drastically changes the model's predictions. We propose a very simple method, k-HOP-PURIFY, which makes each node's prediction on the k-hop egonet centered at that node instead of on the entire graph, and show that this boosts adversarial accuracy. It can be used both i) as a post-processing step after applying popular defenses and ii) as a standalone defense comparable to many competitors. The method is extremely lightweight and scalable (it takes 4 lines of code to implement), unlike many other defense methods that are computationally expensive or rely on heuristics. We show performance gains through extensive experimentation across various types of attacks (poison/evasion, targeted/untargeted), perturbation rates, and defenses implemented in the DeepRobust library.
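The core idea described in the abstract (predicting each node's label from its k-hop egonet rather than the full graph) can be sketched with a few lines of PyTorch Geometric. The snippet below is only an illustration, not the authors' released code: the function name `khop_purify_predict` and the assumption that `model` takes `(x, edge_index)` are hypothetical, while `torch_geometric.utils.k_hop_subgraph` is a real utility for extracting the egonet.

```python
# A minimal sketch (not the authors' implementation) of predicting on k-hop egonets.
# Assumes a trained PyTorch Geometric GNN `model(x, edge_index)` returning node logits.
import torch
from torch_geometric.utils import k_hop_subgraph

def khop_purify_predict(model, x, edge_index, num_hops=2):
    """Predict each node's label from its k-hop egonet instead of the full graph."""
    num_nodes = x.size(0)
    preds = torch.empty(num_nodes, dtype=torch.long)
    model.eval()
    with torch.no_grad():
        for v in range(num_nodes):
            # Extract the k-hop egonet centered at node v, relabeling nodes to 0..|S|-1.
            subset, sub_edge_index, mapping, _ = k_hop_subgraph(
                v, num_hops, edge_index, relabel_nodes=True, num_nodes=num_nodes)
            # Run the GNN on the egonet only and read off the center node's prediction.
            logits = model(x[subset], sub_edge_index)
            preds[v] = logits[mapping[0]].argmax()
    return preds
```

Restricting the receptive field to the egonet limits how much an adversarial edge elsewhere in the graph can influence a given node's prediction, which is the intuition the abstract appeals to.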
Submission Number: 77