GRAIN: Exact Graph Reconstruction from Gradients

ICLR 2025 Conference Submission 12256 Authors

27 Sept 2024 (modified: 22 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: gradient leakage, gradient inversion, graph neural networks, federated learning, graph convolutional networks, gnn, gcn, attack, privacy, reconstruction
TL;DR: We present GRAIN, the first gradient leakage attack designed specifically for graph neural networks, and show that it achieves a high fraction of exact reconstructions and outperforms existing attacks on partial reconstruction.
Abstract: Federated learning allows multiple parties to train collaboratively while only sharing gradient updates. However, recent work has shown that it is possible to exactly reconstruct private data such as text and images from gradients for both fully connected and transformer layers in the honest-but-curious setting. In this work, we present GRAIN, the first exact reconstruction attack on graph-structured data that recovers both the structure of the graph and the associated node features. Concretely, we focus on Graph Convolutional Networks (GCN), a powerful framework for learning on graphs. Our method first utilizes the low-rank structure of GCN layer updates to efficiently reconstruct and filter building blocks, which are subgraphs of the input graph. These building blocks are then joined to complete the input graph. Our experimental evaluation on molecular datasets shows that GRAIN can perfectly reconstruct up to 70% of all molecules, compared to at most 20% correctly positioned nodes and 32% recovered node features for the baseline.
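The following is a minimal, illustrative sketch (not the authors' implementation) of the low-rank observation the abstract refers to: for a single normalized GCN layer, the shared weight gradient is a product of the propagated node features and the upstream gradient, so its rank is bounded by the number of nodes, and a candidate (building-block) feature vector can be filtered by testing whether it lies in the corresponding low-dimensional span. The toy graph, the random stand-in for the upstream gradient, and the helper `in_span` are assumptions made purely for illustration.

```python
# Hedged sketch of the low-rank filtering idea, assuming one normalized GCN layer.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n nodes, d input features, h hidden units (all values are assumptions).
n, d, h = 5, 8, 16
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                         # undirected adjacency
A_hat = A + np.eye(n)                          # add self-loops
deg = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))   # symmetric normalization

X = rng.standard_normal((n, d))                # private node features
W = rng.standard_normal((d, h))                # first GCN layer weights

# Forward pass of one GCN layer: Z = A_norm @ X @ W.
P = A_norm @ X                                 # propagated features (n x d)
Z = P @ W
dL_dZ = rng.standard_normal((n, h))            # stand-in for the upstream gradient

# Shared gradient update for W: dL/dW = P^T dL/dZ, so rank(grad_W) <= n.
grad_W = P.T @ dL_dZ
print("rank of grad_W:", np.linalg.matrix_rank(grad_W), "<= n =", n)

# Span test used conceptually for filtering: a genuine propagated feature
# vector lies in the column space of grad_W (generically equal to span of P's rows),
# while an arbitrary candidate typically does not.
U, S, _ = np.linalg.svd(grad_W, full_matrices=False)
basis = U[:, S > 1e-8]                         # orthonormal basis of col(grad_W)

def in_span(v, basis, tol=1e-6):
    """Check numerically whether vector v lies in span(basis)."""
    residual = v - basis @ (basis.T @ v)
    return np.linalg.norm(residual) < tol * max(np.linalg.norm(v), 1.0)

true_candidate = P[0]                          # a genuine propagated feature row
fake_candidate = rng.standard_normal(d)        # a random (wrong) candidate
print("true candidate in span:", in_span(true_candidate, basis))
print("fake candidate in span:", in_span(fake_candidate, basis))
```

In this toy run the genuine candidate passes the span test while the random one fails, which is the intuition behind filtering building-block subgraphs before joining them into the full input graph.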
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12256