GraphPrivatizer: Improved Structural Differential Privacy for Graph Neural Networks

TMLR Paper 2702 Authors

16 May 2024 (modified: 10 Jul 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Graph privacy is crucial in systems with a graph structure where the confidentiality of participants is essential to the integrity of the system itself. In banking systems and transaction networks, for instance, the privacy of customers' financial information and transaction details must be protected. We propose GraphPrivatizer, a method that privatizes the structure of a graph and protects it under Differential Privacy. GraphPrivatizer performs a controlled perturbation of the graph structure by randomly replacing the neighbors of a node with other similar neighbors, according to a similarity metric. With regard to neighbor perturbation, we find that aggregating features to compute similarities and imposing a minimum similarity score between the original and replacement nodes provides the best privacy-utility trade-off. We use our method to train a Graph Neural Network server-side without disclosing users' private information to the server. We conduct experiments on real-world graph datasets and empirically evaluate the privacy of our models against privacy attacks.
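To make the perturbation idea concrete, the following Python snippet is a minimal, hypothetical sketch of structural privatization via neighbor replacement, not the authors' implementation: the function name `privatize_neighbors`, the randomized-response keep probability, the one-hop mean aggregation, and the `min_sim` threshold are all illustrative assumptions.

```python
import numpy as np

def privatize_neighbors(adj, features, eps, min_sim=0.5, rng=None):
    """Hypothetical sketch: keep each edge with a randomized-response
    probability, otherwise replace the neighbor with a random node whose
    aggregated features are at least min_sim cosine-similar to it."""
    rng = rng or np.random.default_rng()
    n = adj.shape[0]
    # Aggregate features over the 1-hop neighborhood (node + neighbors' mean)
    # before computing similarities, as an assumed aggregation scheme.
    deg = adj.sum(axis=1, keepdims=True) + 1
    agg = (features + adj @ features) / deg
    normed = agg / (np.linalg.norm(agg, axis=1, keepdims=True) + 1e-12)
    sims = normed @ normed.T  # pairwise cosine similarity of aggregated features
    keep_prob = np.exp(eps) / (np.exp(eps) + 1)  # randomized-response style
    new_adj = np.zeros_like(adj)
    for u in range(n):
        for v in np.flatnonzero(adj[u]):
            if rng.random() < keep_prob:
                new_adj[u, v] = 1  # keep the original neighbor
            else:
                # Replace v with a node whose similarity to v clears min_sim.
                cand = np.flatnonzero(sims[v] >= min_sim)
                cand = cand[cand != u]
                w = rng.choice(cand) if cand.size else v
                new_adj[u, w] = 1
    return new_adj
```

The perturbed adjacency matrix returned by such a routine could then be used in place of the true one when training a GNN server-side, so the server never observes the original edge set.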
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have:
* expanded our discussion of the related work and of our methodology/approach;
* expanded on our motivation and provided more considerations about local differential privacy;
* increased the clarity of our notation and exposition of Section 4.2;
* provided more justifications and experimental results (Appendix B) to motivate the choice of cosine similarity as a similarity metric;
* linked a repository with our code;
* added a discussion of limitations and future work.
Assigned Action Editor: ~Giannis_Nikolentzos1
Submission Number: 2702