CGP: Centroid-guided Graph Poisoning for Link Inference Attacks in Graph Neural Networks

Published: 01 Jan 2023, Last Modified: 13 Nov 2024, IEEE Big Data 2023, CC BY-SA 4.0
Abstract: Graph Neural Networks (GNNs) are the state-of-the-art machine learning models on graph data, which many modern big data applications rely on. However, a GNN's potential leakage of sensitive graph node relationships (i.e., links) could cause severe user privacy infringements. An attacker might infer the sensitive graph links from the posteriors of a GNN. Such attacks are named graph link inference attacks. While most existing research considers attack settings without malicious users, this work considers the setting where some malicious nodes are established by the attacker. This setting enables link inference without relying on an estimate of the number of links in the target graph, which significantly enhances the practicality of link inference attacks. This work further proposes centroid-guided graph poisoning (CGP). Without participating in the training process of the target model, CGP manipulates links between malicious nodes to make the target model more vulnerable to graph link inference attacks. Experimental results in this work demonstrate that with less than 5% of nodes being malicious, i.e., modifying approximately 0.25% of all links, CGP can increase the F1 score of graph link inference attacks by up to 4%.
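The abstract describes inferring links from a GNN's posteriors. A common form of such an attack, sketched below under assumptions (this is a generic posterior-similarity baseline, not the paper's CGP method; the function names, the cosine-similarity metric, and the threshold parameter are illustrative choices), predicts a link between two nodes when the model's output distributions for them are sufficiently similar:

```python
# Hypothetical sketch of a posterior-similarity link inference attack.
# Assumption: the attacker can query the target GNN and obtain, for each
# node, a posterior (class-probability vector). Connected nodes tend to
# receive similar posteriors, which is the signal the attack exploits.
import numpy as np


def cosine_sim(a, b):
    """Cosine similarity between two posterior vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def infer_links(posteriors, candidate_pairs, threshold):
    """Predict a link for each candidate pair whose posteriors are
    more similar than `threshold` (a hypothetical attacker-chosen value;
    in the setting the abstract describes, such a decision rule could be
    calibrated on links among attacker-controlled malicious nodes instead
    of an estimate of the graph's total link count)."""
    return [(u, v) for u, v in candidate_pairs
            if cosine_sim(posteriors[u], posteriors[v]) >= threshold]
```

For example, two nodes whose posteriors are `[1.0, 0.0]` and `[0.9, 0.1]` would be predicted as linked at a threshold of 0.9, while a pair with orthogonal posteriors would not.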