Reconstructing Training Data with Informed Adversaries

Published: 04 Nov 2021, Last Modified: 22 Oct 2023
PRIML 2021 Poster
Keywords: Reconstruction attacks
Abstract: Given access to a machine learning model, can an adversary reconstruct the model’s training data? This work proposes a formal threat model to study this question, shows that reconstruction attacks are feasible in theory and in practice, and presents preliminary results assessing how different factors of standard machine learning pipelines affect the success of reconstruction. Finally, we empirically evaluate what levels of differential privacy suffice to prevent reconstruction attacks.
Paper Under Submission: The paper is NOT under submission at NeurIPS
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2201.04845/code)