On the Inadequacy of Similarity-based Privacy Metrics: Reconstruction Attacks against "Truly Anonymous Synthetic Data"

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: synthetic data, privacy metrics, reconstruction attacks, differential privacy, generative models
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We demonstrate the inadequacy of commonly used similarity-based privacy metrics to guarantee privacy in synthetic data through analysis, counter-examples, and a novel reconstruction attack.
Abstract: Training generative models to produce synthetic data is meant to provide a privacy-friendly approach to data release. However, we get robust guarantees only when models are trained to satisfy Differential Privacy (DP). Alas, this is not the standard in industry, as many companies use ad hoc strategies to empirically evaluate privacy based on the statistical similarity between synthetic and real data. In this paper, we review the privacy metrics offered by leading companies in this space and shed light on a few critical flaws in reasoning about privacy entirely via empirical evaluations. We analyze the undesirable properties of the metrics and filters they use and demonstrate their unreliability and inconsistency through counter-examples. We then present a reconstruction attack, ReconSyn, which successfully recovers (i.e., leaks all the attributes of) at least 78% of the low-density train records (or outliers) with only black-box access to a single fitted generative model and the privacy metrics. Finally, we show that applying DP or using generators with low utility does not successfully mitigate ReconSyn, as the privacy leakage still comes from access to the metrics. Overall, our work serves as a warning to practitioners not to deviate from established privacy-preserving mechanisms.
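For context, the class of metrics critiqued in the abstract typically measures how close each synthetic record lies to the real training data, e.g., a "distance to closest record" (DCR) check with a pass/fail threshold. The sketch below is a minimal, hypothetical Python illustration, not the vendors' or the paper's actual implementation; the function names, the threshold tau, and the toy data are assumptions. It shows why black-box query access to such a similarity-based filter can itself leak information about train records, which is the kind of leakage the abstract attributes to the metrics rather than to the generative model.

```python
# Hypothetical sketch of a similarity-based privacy metric ("distance to
# closest record", DCR) and the signal it exposes to an attacker.
# All names and thresholds here are illustrative assumptions.
import numpy as np

def dcr(candidates: np.ndarray, train: np.ndarray) -> np.ndarray:
    """For each candidate record, the Euclidean distance to its nearest
    train record. Similarity-based filters typically require these
    distances to stay above a threshold before a record 'passes'."""
    # Pairwise distances via broadcasting: shape (n_candidates, n_train)
    dists = np.linalg.norm(candidates[:, None, :] - train[None, :, :], axis=-1)
    return dists.min(axis=1)

def metric_oracle(candidates: np.ndarray, train: np.ndarray, tau: float) -> np.ndarray:
    """Boolean pass/fail signal an attacker can observe: a candidate
    'fails' the privacy filter exactly when it is too close to a
    train record -- so the answer depends directly on the train data."""
    return dcr(candidates, train) < tau

# Toy demonstration: querying the metric reveals whether a guessed record
# (nearly) matches a train record, without ever inverting the generator.
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 5))
guess_in = train[0] + rng.normal(scale=1e-3, size=5)  # near a real record
guess_out = rng.normal(size=5)                        # unrelated record
print(metric_oracle(np.stack([guess_in, guess_out]), train, tau=0.1))
# -> [ True False]: the filter's verdict discloses which guess hit the train set.
```

In this toy setting, the filter's output is a direct function of the private train data, so repeated queries let an attacker confirm or refine candidate records; this is consistent with the abstract's point that DP training of the generator alone cannot close the channel, since the metric itself touches the raw data.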
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6385