Membership Inference Attacks Against Robust Graph Neural Network

Published: 01 Jan 2022, Last Modified: 17 Apr 2025 · CSS 2022 · CC BY-SA 4.0
Abstract: With the rapid development of neural network technologies in machine learning, neural networks are widely used in artificial intelligence tasks. Because graph data are ubiquitous, graph neural networks, a class of neural networks specialized for processing graph data, have become a research hotspot. This paper first studies the relationship between adversarial attacks and privacy attacks on graphs, i.e., whether a robust model obtained by adversarial training on graphs improves the effectiveness of graph membership inference attacks. We also find that the gap between the robust model's loss on the training set and on the test set is a key reason for the increased membership inference attack success rate. Extensive experimental evaluations on Cora, Cora-ML, Citeseer, Polblogs, and Pubmed demonstrate that a robust model obtained by adversarial training can significantly improve the success rate of membership inference attacks.
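The abstract's key observation, that a wider train/test loss gap makes membership inference easier, can be illustrated with a classic loss-threshold attack. The sketch below is illustrative only: the losses are simulated (not from the paper's models or datasets), and the threshold heuristic is an assumption, not the authors' method.

```python
import numpy as np

# Illustrative sketch of a loss-threshold membership inference attack.
# Assumption: per-node losses are simulated here; an adversarially trained
# model tends to fit training nodes with much lower loss than unseen nodes,
# and the attacker exploits that gap.
rng = np.random.default_rng(0)

# Simulated per-node cross-entropy losses: members (training nodes) cluster
# at low loss, non-members at noticeably higher loss.
member_losses = rng.gamma(shape=2.0, scale=0.05, size=500)
nonmember_losses = rng.gamma(shape=2.0, scale=0.50, size=500)

def loss_threshold_attack(losses, tau):
    """Predict 'member' (1) when the model's loss falls below tau."""
    return (losses < tau).astype(int)

# Threshold choice is a simple illustrative heuristic: a value between the
# two loss distributions. A real attacker might calibrate it on shadow models.
tau = 0.3

pred_members = loss_threshold_attack(member_losses, tau)
pred_nonmembers = loss_threshold_attack(nonmember_losses, tau)

# Attack accuracy: members predicted 1, non-members predicted 0.
attack_accuracy = (pred_members.sum() + (1 - pred_nonmembers).sum()) / 1000.0
print(f"attack accuracy: {attack_accuracy:.3f}")
```

The wider the separation between the two loss distributions (as adversarial training tends to produce on the training set), the higher this simple attack's accuracy climbs, which matches the paper's reported effect.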
