Learning Privacy-Preserving Graph Embeddings Against Sensitive Attributes Inference

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Inference privacy, differential privacy, graph representation
Abstract: We focus on preserving the privacy of sensitive attributes associated with certain private nodes of a graph when releasing graph data. Notably, simply deleting the sensitive attributes from the graph data does not defend against adversarial attacks, because an adversary can still leverage the graph structure and the non-sensitive node features to infer the sensitive attributes. We propose a framework that learns graph embeddings insensitive to changes in the specified sensitive attributes while maximally preserving the graph structure and the non-sensitive node features for downstream tasks. The key ingredient of our framework is a novel conditional variational graph autoencoder (CVGAE), which captures the relationship between the learned embeddings and the sensitive attributes. This allows us to quantify the privacy loss, which in turn serves as a penalty on privacy leakage when learning graph embeddings, without resorting to adversarial training.
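
The submission page includes no code, so the following is only a rough illustrative sketch of the idea described in the abstract, not the authors' implementation. It assumes a binary sensitive attribute per node, a dense adjacency matrix, a two-layer GCN encoder conditioned on the attribute s, and an inner-product edge decoder; the privacy penalty shown (a KL divergence between the embedding posteriors computed with the true and the flipped attribute) is one plausible non-adversarial way to penalize sensitivity to s, and may differ from the paper's exact loss. All layer sizes and loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """One graph convolution over a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_norm):
        return self.lin(a_norm @ x)


class CVGAE(nn.Module):
    """Encoder q(z | X, A, s) for a conditional variational graph autoencoder."""
    def __init__(self, feat_dim, hid_dim=32, lat_dim=16):
        super().__init__()
        self.gc1 = DenseGCNLayer(feat_dim + 1, hid_dim)  # +1: sensitive attr s
        self.gc_mu = DenseGCNLayer(hid_dim, lat_dim)
        self.gc_logvar = DenseGCNLayer(hid_dim, lat_dim)

    def encode(self, x, a_norm, s):
        h = F.relu(self.gc1(torch.cat([x, s.unsqueeze(-1)], dim=-1), a_norm))
        return self.gc_mu(h, a_norm), self.gc_logvar(h, a_norm)


def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)


def diag_gauss_kl(mu0, logvar0, mu1, logvar1):
    """KL( N(mu0, var0) || N(mu1, var1) ) per node, diagonal covariances."""
    return 0.5 * (logvar1 - logvar0
                  + (logvar0.exp() + (mu0 - mu1) ** 2) / logvar1.exp()
                  - 1).sum(-1)


def training_loss(model, x, adj, s, lam=1.0):
    a_norm = normalize_adj(adj)
    mu, logvar = model.encode(x, a_norm, s)
    mu_f, logvar_f = model.encode(x, a_norm, 1.0 - s)  # counterfactual s
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    recon = F.binary_cross_entropy_with_logits(z @ z.t(), adj)  # edge decoder
    kl_prior = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Privacy penalty: the posterior over embeddings should barely move when
    # the sensitive attribute is flipped, i.e. z stays insensitive to s.
    privacy = diag_gauss_kl(mu, logvar, mu_f, logvar_f).mean()
    return recon + kl_prior + lam * privacy


# Toy usage on a random undirected graph with hypothetical shapes.
n, d = 20, 8
adj = (torch.rand(n, n) < 0.2).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
x = torch.randn(n, d)
s = (torch.rand(n) < 0.5).float()
model = CVGAE(d)
loss = training_loss(model, x, adj, s)
loss.backward()
print(f"loss = {loss.item():.3f}")
```

Note that the weight lam trades off utility (graph reconstruction) against privacy (insensitivity of z to s); the counterfactual-encoding penalty avoids the min-max optimization that an adversarially trained attribute discriminator would require, which matches the abstract's stated goal of training without adversarial training.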
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
TL;DR: Preserving the inference privacy of certain sensitive attributes associated with graph nodes in graph representation learning.