Keywords: Generative AI, Diffusion models, Differential privacy, Privacy leakage, Synthetic data, Memorization
TL;DR: We propose a differentially private (DP) generation method for diffusion models that produces high-fidelity synthetic samples while simultaneously providing DP guarantees on privacy leakage.
Abstract: Diffusion-based generative models achieve unprecedented image quality but are known to leak private information about their training data. Our goal is to provide provable guarantees on privacy leakage of training data while simultaneously enabling generation of high-fidelity samples. Our proposed approach first non-privately trains an ensemble of diffusion models and then aggregates their predictions to provide privacy guarantees for generated samples. We demonstrate the success of our approach on MNIST and CIFAR-10.
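The abstract's aggregation step can be illustrated with a minimal sketch of one standard way to privatize an ensemble mean: clip each member's prediction to bound sensitivity, then add Gaussian noise calibrated to that bound. The function name, parameters, and noise calibration below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def dp_aggregate(preds, clip_norm=1.0, sigma=0.5, rng=None):
    """Privately aggregate per-model predictions (illustrative sketch).

    preds: array of shape (K, D), one prediction vector per ensemble member.
    Each prediction is clipped to L2 norm `clip_norm`, so the mean over K
    members has sensitivity clip_norm / K to any single member; Gaussian
    noise scaled to that sensitivity yields a DP estimate of the mean.
    """
    rng = np.random.default_rng() if rng is None else rng
    preds = np.asarray(preds, dtype=float)
    K, D = preds.shape
    norms = np.linalg.norm(preds, axis=1, keepdims=True)
    clipped = preds * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean = clipped.mean(axis=0)
    noise = rng.normal(0.0, sigma * clip_norm / K, size=D)
    return mean + noise
```

In a diffusion sampler, such an aggregator would replace the single model's noise prediction at each denoising step; the noise multiplier `sigma` then trades sample fidelity against the strength of the privacy guarantee.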
Submission Number: 41