Differentially Private Diffusion Models

Published: 08 Sept 2023, Last Modified: 08 Sept 2023 · Accepted by TMLR
Abstract: While modern machine learning models rely on increasingly large training datasets, data is often limited in privacy-sensitive domains. Generative models trained with differential privacy (DP) on sensitive data can sidestep this challenge, providing access to synthetic data instead. We build on the recent success of diffusion models (DMs) and introduce Differentially Private Diffusion Models (DPDMs), which enforce privacy using differentially private stochastic gradient descent (DP-SGD). We investigate the DM parameterization and the sampling algorithm, which turn out to be crucial ingredients in DPDMs, and propose noise multiplicity, a powerful modification of DP-SGD tailored to the training of DMs. We validate our novel DPDMs on image generation benchmarks and achieve state-of-the-art performance in all experiments. Moreover, on standard benchmarks, classifiers trained on DPDM-generated synthetic data perform on par with task-specific DP-SGD-trained classifiers, which has not been demonstrated before for DP generative models. Project page and code: https://nv-tlabs.github.io/DPDM.
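As a rough illustration of how the noise multiplicity idea mentioned in the abstract can plug into DP-SGD training of a diffusion model: the sketch below averages the denoising loss over K diffusion noise samples for each data point before per-example gradient clipping and Gaussian noising. The toy denoiser, noise-level sampling, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch: DP-SGD with noise multiplicity for a diffusion model.
import torch
import torch.nn as nn

K = 4            # noise multiplicity: diffusion noise samples per data point (assumed value)
CLIP_NORM = 1.0  # per-example gradient clipping norm for DP-SGD
NOISE_MULT = 1.1 # DP-SGD Gaussian noise multiplier (depends on the privacy budget)
LR = 1e-3

# Toy denoiser: predicts the noise added to a flattened 28x28 image,
# conditioned on the diffusion noise level sigma.
denoiser = nn.Sequential(nn.Linear(28 * 28 + 1, 256), nn.SiLU(), nn.Linear(256, 28 * 28))
optimizer = torch.optim.SGD(denoiser.parameters(), lr=LR)

def diffusion_loss(x, sigma):
    """Denoising loss for one example at one noise level (per-example scalar)."""
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    pred = denoiser(torch.cat([x_noisy, sigma], dim=-1))
    return ((pred - eps) ** 2).mean(dim=-1)

def dp_sgd_step(batch):
    params = list(denoiser.parameters())
    grad_sum = [torch.zeros_like(p) for p in params]
    for x in batch:  # DP-SGD needs per-example gradients for clipping
        x = x.unsqueeze(0)
        # Noise multiplicity: average the loss over K diffusion noise levels for
        # the SAME example, reducing gradient variance; the example is still
        # touched only once, so the privacy cost of the step is unchanged.
        sigmas = torch.rand(K, 1).exp()  # illustrative noise-level sampling
        loss = torch.stack([diffusion_loss(x, s.view(1, 1)) for s in sigmas]).mean()
        grads = torch.autograd.grad(loss, params)
        # Clip the per-example gradient to bound its sensitivity.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (CLIP_NORM / (total_norm + 1e-6)).clamp(max=1.0)
        for gs, g in zip(grad_sum, grads):
            gs += g * scale
    # Add calibrated Gaussian noise and average, as in standard DP-SGD.
    for p, gs in zip(params, grad_sum):
        noise = torch.randn_like(gs) * NOISE_MULT * CLIP_NORM
        p.grad = (gs + noise) / len(batch)
    optimizer.step()
    optimizer.zero_grad()

# Usage with a random stand-in batch of flattened images:
dp_sgd_step(torch.rand(8, 28 * 28))
```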
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added the author list and reverted the modifications highlighted in blue back to black.
Code: https://github.com/nv-tlabs/DPDM
Assigned Action Editor: ~Simon_Lacoste-Julien1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1099