Releasing Graph Neural Networks with Differential Privacy Guarantees

Published: 26 Jun 2023, Last Modified: 26 Jun 2023. Accepted by TMLR.
Abstract: With the increasing popularity of graph neural networks (GNNs) in sensitive applications such as healthcare and medicine, concerns have been raised over the privacy of trained GNNs. Notably, GNNs are vulnerable to privacy attacks, such as membership inference attacks, even when only black-box access to the trained model is granted. We propose PRIVGNN, a privacy-preserving framework for releasing GNN models in a centralized setting. Assuming access to a public unlabeled graph, PRIVGNN releases GNN models trained explicitly on public data along with knowledge obtained from the private data in a privacy-preserving manner. PRIVGNN combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees. We theoretically analyze our approach in the Rényi differential privacy framework. Furthermore, we demonstrate the strong experimental performance of our method compared to several baselines adapted for graph-structured data. Our code is available at https://github.com/iyempissy/privGnn.
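To give a concrete feel for the mechanism sketched in the abstract, the snippet below illustrates the general "random subsampling + noisy labeling" idea in a hedged, simplified form: for one public query node, a random subset of the private data is drawn, a teacher model fitted on that subset scores the query, and noise is added before the pseudo-label is released. This is a minimal sketch under assumptions, not the authors' released implementation; `train_teacher`, `NUM_CLASSES`, `SUBSAMPLE_RATIO`, and `NOISE_SCALE` are hypothetical names introduced here for illustration (in the paper the teacher would be a GNN trained on the sampled private subgraph, and the noise calibration follows the Rényi DP analysis).

```python
# Hypothetical sketch of "random subsampling + noisy labeling" for one public
# query node. NumPy only; train_teacher stands in for fitting a GNN on the
# sampled private data and is an assumed helper, not part of the released code.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 7          # e.g. a Cora-style node-classification task (assumed)
SUBSAMPLE_RATIO = 0.1    # fraction of private examples used per query (assumed)
NOISE_SCALE = 1.0        # noise scale controlling the privacy/utility trade-off (assumed)

def noisy_pseudo_label(private_examples, public_query, train_teacher):
    """Return a noise-perturbed pseudo-label for a single public query node."""
    # 1) Random subsampling: only a small random subset of the private data
    #    is used for this query, which amplifies the privacy guarantee.
    k = max(1, int(SUBSAMPLE_RATIO * len(private_examples)))
    subset = rng.choice(len(private_examples), size=k, replace=False)

    # 2) Knowledge distillation step: fit a teacher on the subsample and
    #    score the public query node (scores has length NUM_CLASSES).
    teacher = train_teacher([private_examples[i] for i in subset])
    scores = np.asarray(teacher(public_query), dtype=float)

    # 3) Noisy labeling: perturb the scores before releasing the argmax,
    #    so the released label carries a differential privacy guarantee.
    noisy = scores + rng.laplace(scale=NOISE_SCALE, size=NUM_CLASSES)
    return int(np.argmax(noisy))
```

The released pseudo-labels would then be used to train a student GNN on the public graph only, so the published model never touches the private data directly.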
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: In our revision, we have made the following modifications:
- The references from Appendix E have been integrated into Section 2.
- Section 2 has been updated.
- The abstract and introduction now explicitly state the settings of our paper.
- We have addressed other minor errors and made necessary corrections.
Code: https://github.com/iyempissy/privGnn
Assigned Action Editor: ~Aurélien_Bellet1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 779