Node-Level Membership Inference Attacks Against Graph Neural Networks

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Graph Neural Network, Membership Inference Attack
TL;DR: We perform the first comprehensive analysis of node-level membership inference attacks against GNNs.
Abstract: Many real-world data, such as social networks and protein structures, are graphs. To fully exploit the information contained in graph data, graph neural networks (GNNs) have been introduced. Previous studies have shown that machine learning models are vulnerable to privacy attacks, but most of these efforts concentrate on models trained on images and text; the privacy risks stemming from GNNs remain largely unstudied. In this paper, we fill this gap by performing the first comprehensive analysis of node-level membership inference attacks against GNNs. We systematically define the threat models and propose eight node-level membership inference attacks based on the adversary's background knowledge. Our evaluation on four GNN architectures and four benchmark datasets shows that GNNs are vulnerable to node-level membership inference even when the adversary has minimal background knowledge. We further show that node degree, graph density, and feature similarity have a major impact on the attack's success. Finally, we investigate three defense mechanisms and show that, among them, differential privacy (DP) protects membership privacy most effectively while preserving the model's utility.
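The abstract describes the attack pipeline only at a high level. As a rough, generic illustration of node-level membership inference from posteriors, and not the paper's actual method, the sketch below trains a binary attack classifier on the posterior vectors a shadow GNN assigns to member and non-member nodes; all names (`attack_features`, `AttackMLP`, `train_attack`) and hyperparameters are hypothetical.

```python
# Hypothetical sketch of a shadow-model membership inference attack on
# per-node GNN posteriors. Assumes posteriors for shadow-member and
# shadow-non-member nodes are already available as tensors.
import torch
import torch.nn as nn


def attack_features(posteriors: torch.Tensor) -> torch.Tensor:
    # Sort each posterior vector in descending order so the attack input
    # is invariant to class ordering, a common membership-inference trick.
    return torch.sort(posteriors, dim=1, descending=True).values


class AttackMLP(nn.Module):
    # Small binary classifier: posterior vector -> {non-member, member}.
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_attack(member_post: torch.Tensor,
                 nonmember_post: torch.Tensor,
                 epochs: int = 100,
                 lr: float = 1e-2) -> AttackMLP:
    # member_post / nonmember_post: posteriors a *shadow* GNN assigns to
    # nodes that were / were not part of its training graph.
    x = attack_features(torch.cat([member_post, nonmember_post]))
    y = torch.cat([torch.ones(len(member_post), dtype=torch.long),
                   torch.zeros(len(nonmember_post), dtype=torch.long)])
    model = AttackMLP(x.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Toy demo with synthetic 7-class "posteriors" (replace with real
    # shadow-model outputs in practice): members tend to get peaked
    # posteriors, non-members flatter ones.
    member = torch.softmax(torch.randn(200, 7) * 3, dim=1)
    nonmember = torch.softmax(torch.randn(200, 7), dim=1)
    attack = train_attack(member, nonmember)
    target_posterior = torch.softmax(torch.randn(1, 7), dim=1)
    pred = attack(attack_features(target_posterior)).argmax(dim=1)
    print("predicted member" if pred.item() == 1 else "predicted non-member")
```

At inference time, the adversary queries the target GNN for a candidate node's posterior and feeds it through the trained attack model; the paper's eight attacks vary the background knowledge (e.g., shadow data and model access) assumed in such a pipeline.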
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)