Pre-trained Perceptual Features Improve Differentially Private Image Generation

Published: 21 Apr 2023, Last Modified: 19 Jul 2023. Accepted by TMLR.
Abstract: Training even moderately-sized generative models with differentially-private stochastic gradient descent (DP-SGD) is difficult: the level of noise required for reasonable levels of privacy is simply too large. We advocate instead building on a good, relevant representation learned from an informative public dataset, then using that representation to model the private data. In particular, we minimize the maximum mean discrepancy (MMD) between the private target data and a generator's distribution, using a kernel based on perceptual features learned from a public dataset. With the MMD, we can simply privatize the data-dependent term once and for all, rather than introducing noise at each step of optimization as in DP-SGD. Our algorithm allows us to generate CIFAR10-level images with $\epsilon \approx 2$ that capture distinctive features of the distribution, far surpassing the current state of the art, which mostly focuses on datasets such as MNIST and FashionMNIST at a large $\epsilon \approx 10$. Our work introduces simple yet powerful foundations for reducing the gap between private and non-private deep generative models. Our code is available at
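The key idea in the abstract — privatize the data-dependent part of the MMD once, then optimize without further noise — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `features` function is a hypothetical placeholder for a frozen perceptual feature extractor trained on public data, and the privacy calibration uses the classical Gaussian mechanism with unit-norm features (so replacing one of the $n$ records changes the mean embedding by at most $2/n$ in L2).

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W):
    """Stand-in feature map: random projection + tanh, normalized to unit
    L2 norm. The paper uses perceptual features from a network pre-trained
    on public data; this placeholder only keeps the sketch self-contained."""
    z = np.tanh(x @ W)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def privatized_mean_embedding(x_priv, W, eps, delta):
    """Release the data-dependent MMD term once via the Gaussian mechanism.
    With unit-norm features, the L2 sensitivity of the mean embedding under
    replace-one is 2/n; sigma follows the standard (eps, delta) calibration."""
    n = x_priv.shape[0]
    mu = features(x_priv, W).mean(axis=0)
    sens = 2.0 / n
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return mu + rng.normal(0.0, sigma, size=mu.shape)

def mmd_sq(mu_priv_noisy, x_gen, W):
    """Squared MMD between the privatized target mean embedding and the
    generator samples' mean embedding, for the linear kernel
    k(x, y) = <phi(x), phi(y)> (up to a data-independent constant)."""
    mu_gen = features(x_gen, W).mean(axis=0)
    return float(np.sum((mu_priv_noisy - mu_gen) ** 2))

# Toy dimensions and data, purely illustrative.
d, k = 8, 32
W = rng.normal(size=(d, k))          # frozen "public" feature weights
x_priv = rng.normal(size=(500, d))   # private dataset (n = 500)

mu_noisy = privatized_mean_embedding(x_priv, W, eps=2.0, delta=1e-5)
loss = mmd_sq(mu_noisy, rng.normal(size=(200, d)), W)
```

Because `mu_noisy` is the only quantity that touches the private data, the post-processing property of differential privacy means a generator can then be trained against this loss with ordinary, non-private gradient descent for as many steps as desired — in contrast to DP-SGD, which pays a privacy cost at every optimization step.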
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: pdf
Changes Since Last Submission: NA
Assigned Action Editor: ~Aurélien_Bellet1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 799