Membership Inference Attack on Diffusion Models via Quantile Regression

Published: 27 Oct 2023, Last Modified: 12 Dec 2023, RegML 2023
Keywords: membership inference, diffusion models
Abstract: Recently, diffusion models have demonstrated great potential for image synthesis due to their ability to generate high-quality synthetic data. However, when applied to sensitive data, these models raise privacy concerns. In this paper, we evaluate the privacy risks of diffusion models through a \emph{membership inference (MI) attack}, which aims to identify whether a target example is in the training set, given access to the trained diffusion model. Our proposed MI attack learns a single quantile regression model that predicts (a quantile of) the distribution of reconstruction loss for each example. This enables us to identify a unique threshold on the reconstruction loss tailored to each example when determining its membership status. We show that our attack outperforms the prior state-of-the-art MI attack while avoiding its high computational cost of training multiple shadow models. Consequently, our work enriches the set of practical tools for auditing the privacy risks of large-scale generative models.
Submission Number: 47
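
To make the per-example thresholding idea concrete, here is a minimal sketch of such an attack, assuming a hypothetical helper `compute_reconstruction_loss(model, x)` that returns the diffusion model's reconstruction loss for an example, and a held-out set of known non-members used to fit the quantile regressor. The feature representation, quantile level, and choice of regressor are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: quantile-regression membership inference on a diffusion model.
# A single regressor predicts a low quantile of the non-member
# reconstruction-loss distribution conditioned on example features;
# an observed loss below that per-example threshold suggests membership.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

ALPHA = 0.05  # quantile level, i.e. the tolerated false-positive rate (assumed)

def fit_quantile_model(nonmember_features, nonmember_losses):
    """Fit one model mapping example features to the ALPHA-quantile of the
    reconstruction loss observed on known non-members."""
    q_model = GradientBoostingRegressor(loss="quantile", alpha=ALPHA)
    q_model.fit(nonmember_features, nonmember_losses)
    return q_model

def infer_membership(q_model, target_features, target_loss):
    """Flag the target as a member when its reconstruction loss falls below
    its predicted per-example threshold (members are reconstructed unusually well)."""
    threshold = q_model.predict(target_features.reshape(1, -1))[0]
    return target_loss < threshold
```

The key design point the sketch illustrates is that the threshold is a function of the example itself rather than a single global cutoff, and it is obtained from one regression model instead of many shadow models.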