Abstract: Graph Neural Networks (GNNs) have excelled across various domains, but recent studies show they are vulnerable to adversarial attacks. Among these, Node Injection Attacks (NIA) are a practical class of attacks that inject malicious nodes to disrupt the model's performance. In this paper, we focus on a more practical NIA scenario in which the attacker can inject only a small number of nodes to degrade the global performance of GNNs, and no information beyond the input features and adjacency relations is available to the attacker. We establish the relationship between resistance distance and graph connectivity and use it to guide the connections between injected and original nodes. To enhance attack effectiveness while reducing detectability, we replicate distant features from the original graph as the initial features of the injected nodes. Furthermore, the adjacency and feature matrices of the injected nodes are optimized in an unsupervised manner using contrastive learning. Based on these ideas, we propose the Resistance Distance Guided Node Injection Attack (RDGNIA). Experiments on three benchmark datasets demonstrate the superiority of our method compared to state-of-the-art approaches.
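As background for the resistance-distance criterion mentioned in the abstract, the effective resistance between two nodes can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian: R(i, j) = L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ. The following NumPy sketch illustrates this standard computation; the function name and usage are illustrative and not taken from the paper.

```python
import numpy as np

def resistance_distance(adj):
    """All-pairs effective (resistance) distance of an undirected graph.

    R[i, j] = L+[i, i] + L+[j, j] - 2 * L+[i, j], where L+ is the
    Moore-Penrose pseudoinverse of the Laplacian L = D - A.
    """
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj   # Laplacian L = D - A
    lap_pinv = np.linalg.pinv(lap)         # Moore-Penrose pseudoinverse
    diag = np.diag(lap_pinv)
    return diag[:, None] + diag[None, :] - 2 * lap_pinv

# Path graph 0-1-2: unit-resistance edges add in series,
# so R(0, 1) = 1 and R(0, 2) = 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
R = resistance_distance(A)
```

Lower resistance distance between a pair of nodes corresponds to more (and shorter) paths between them, which is why it serves as a proxy for graph connectivity when deciding where injected nodes should attach.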