Node-Level Differentially Private Graph Neural Networks

TMLR Paper733 Authors

27 Dec 2022 (modified: 28 Feb 2023) · Withdrawn by Authors
Abstract: Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data and computing node-level representations via aggregation of information from the neighborhood of each node. However, this aggregation implies an increased risk of revealing sensitive information, as a node can participate in the inference for multiple nodes. Consequently, standard privacy-preserving machine learning techniques, such as differentially private stochastic gradient descent (DP-SGD) - which are designed for situations where each data point participates in the inference for one point only - either do not apply or lead to inaccurate models. In this work, we formally define the problem of learning GNN parameters with node-level privacy, and provide an algorithmic solution with a strong differential privacy guarantee. We employ a careful sensitivity analysis and provide a nontrivial extension of the privacy amplification technique to the GNN setting. An empirical evaluation on standard benchmark datasets demonstrates that our method is indeed able to learn accurate privacy-preserving GNNs that outperform both private and non-private methods which completely ignore graph information.
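To illustrate the difficulty the abstract describes, the following is a minimal sketch (not the authors' implementation) of a DP-SGD-style noisy clipped update adapted to node-level privacy. The names and parameters here (`clip_norm`, `noise_multiplier`, `max_degree`) are illustrative assumptions: per-example gradients are taken to come from degree-bounded neighborhoods, so a single node can influence up to `max_degree` loss terms and the noise must be calibrated to that larger sensitivity, rather than to a single clipped gradient as in standard DP-SGD.

```python
# Minimal sketch of a node-level DP-SGD-style step (assumptions noted above).
import numpy as np

def noisy_clipped_update(per_example_grads, clip_norm, noise_multiplier, max_degree, rng):
    """Clip each per-example gradient, sum, and add Gaussian noise.

    per_example_grads: array of shape (batch_size, num_params)
    clip_norm:         L2 clipping threshold C for each per-example gradient
    noise_multiplier:  sigma; noise std is sigma * C * max_degree
    max_degree:        assumed bound K on how many loss terms one node can affect
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale              # each row now has norm <= C
    summed = clipped.sum(axis=0)
    # Node-level sensitivity: removing one node can change up to K clipped terms,
    # so noise is calibrated to K * C instead of C as in standard DP-SGD.
    noise = rng.normal(0.0, noise_multiplier * clip_norm * max_degree, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))  # toy per-example gradients
step = noisy_clipped_update(grads, clip_norm=1.0, noise_multiplier=1.1, max_degree=5, rng=rng)
```

The factor of `max_degree` in the noise scale is the crux: without bounding how many neighborhoods a node appears in, the sensitivity of the summed gradient is unbounded, which is why the paper's sensitivity analysis and amplification argument are needed.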
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Addresses reviewer comments.
Assigned Action Editor: ~Yiming_Ying1
Submission Number: 733