Keywords: Differential privacy, diffusion models, trustworthy AI, generative models
TL;DR: This paper provides convergence analysis of differentially private diffusion models, establishing Wasserstein-2 distance bounds for DP-SGD applied to score-matching objectives and revealing fundamental privacy-utility-dimensionality tradeoffs.
Abstract: Score-based diffusion models have emerged as popular generative models trained on increasingly large datasets, yet they are often susceptible to attacks that can disclose sensitive information about the training data. To provide Differential Privacy (DP) guarantees, these models are commonly trained with DP-SGD on the score-matching objective. In this work, we study Differentially Private Diffusion Models (DPDM) both theoretically and empirically. We provide a quantitative $L^2$ convergence rate of DP-SGD to its global optimum, leading to the first error analysis of diffusion models trained with DP-SGD. Our theoretical framework contributes to uncertainty quantification in generative AI systems, providing essential convergence guarantees for trustworthy decision-making applications that require both privacy preservation and reliability.
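The DP-SGD mechanism the abstract refers to can be sketched as follows: per-example gradients of the score-matching loss are clipped in norm and perturbed with Gaussian noise before the parameter update. This is a minimal illustrative sketch, not the paper's implementation; the linear score model, the loss, and all hyperparameter names (`clip_norm`, `noise_mult`, `lr`) are assumptions for illustration.

```python
import numpy as np

def dp_sgd_step(theta, X, sigma_t, clip_norm=1.0, noise_mult=1.0, lr=0.1, seed=None):
    """One DP-SGD step on a toy 1-D denoising score-matching loss.

    Illustrative only: the score model is linear, s_theta(x) = theta * x,
    and the target score of N(0, sigma_t^2) is -x / sigma_t^2, so the
    optimum is theta = -1 / sigma_t^2. The two defining DP-SGD steps are
    per-example gradient clipping and Gaussian noise addition.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for x in X:
        # per-example gradient of 0.5 * (theta*x - (-x / sigma_t**2))**2 wrt theta
        g = (theta * x + x / sigma_t**2) * x
        # clip the per-example gradient to norm clip_norm
        g *= min(1.0, clip_norm / (abs(g) + 1e-12))
        clipped.append(g)
    # sum clipped gradients, add calibrated Gaussian noise, then average
    noisy_mean = (np.sum(clipped) + noise_mult * clip_norm * rng.normal()) / len(X)
    return theta - lr * noisy_mean
```

With the noise multiplier set to zero the iteration reduces to clipped SGD and converges to the global optimum of this toy objective; with noise it matches the privatized dynamics whose convergence the paper analyzes.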
Submission Number: 118