Private GANs, Revisited

03 Oct 2022 (modified: 14 Apr 2024) · NeurIPS 2022 SyntheticData4ML · Readers: Everyone
Keywords: differential privacy, GAN, synthetic data, generative models, image synthesis
TL;DR: We show private GANs perform well, after more careful tuning.
Abstract: We show that with improved training, the standard approach for differentially private GANs – updating the discriminator with noisy gradients – achieves or competes with state-of-the-art results for private image synthesis. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between generator and discriminator necessary for successful GAN training. We show that a simple fix restores parity: taking more discriminator steps between generator steps. Furthermore, with the goal of restoring parity, we experiment with further modifications to improve discriminator training and see further improvements in generation quality. For MNIST at a privacy budget of ε = 10, our private GANs improve the record FID from 48.4 to 13.0, as well as downstream classifier accuracy from 83.2% to 95.0%.
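The fix described in the abstract — privatizing only the discriminator with DP-SGD (per-example gradient clipping plus Gaussian noise) while taking several discriminator steps per generator step — can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the linear "networks", the 2-D Gaussian data, and all hyperparameter values (`C`, `sigma`, `n_d`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: real data is a 2-D Gaussian; both models are linear.
d_w = rng.normal(size=3) * 0.1        # discriminator: logistic regression on [x1, x2, 1]
g_w = rng.normal(size=(2, 2)) * 0.1   # generator: linear map of 2-D noise

C, sigma, lr, n_d = 1.0, 1.0, 0.1, 5  # clip norm, noise multiplier, step size, D steps per G step

def d_logits(x, w):
    return x @ w[:2] + w[2]

def d_grad_per_example(x, y, w):
    # Per-example gradient of binary cross-entropy w.r.t. discriminator weights.
    p = 1.0 / (1.0 + np.exp(-d_logits(x, w)))
    err = (p - y)[:, None]
    return np.hstack([err * x, err])  # shape (batch, 3)

for step in range(20):
    # --- n_d noisy (DP-SGD) discriminator steps per generator step ---
    for _ in range(n_d):
        real = rng.normal(loc=[2.0, -1.0], size=(32, 2))
        fake = rng.normal(size=(32, 2)) @ g_w
        x = np.vstack([real, fake])
        y = np.concatenate([np.ones(32), np.zeros(32)])
        g = d_grad_per_example(x, y, d_w)
        # Clip each per-example gradient to norm C, sum, then add Gaussian noise.
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / C)
        noisy_grad = g.sum(axis=0) + sigma * C * rng.normal(size=3)
        d_w -= lr * noisy_grad / len(x)
    # --- one ordinary generator step (touches no real data, so no noise needed) ---
    z = rng.normal(size=(32, 2))
    fake = z @ g_w
    p = 1.0 / (1.0 + np.exp(-d_logits(fake, d_w)))
    # Gradient of -log D(G(z)) w.r.t. generator weights, via the chain rule.
    g_grad = z.T @ ((p - 1.0)[:, None] * d_w[:2][None, :]) / len(z)
    g_w -= lr * g_grad
```

The structural point matches the abstract: only the discriminator sees real data, so only its updates are noised, and the `n_d` inner loop restores the generator/discriminator balance that noise otherwise disrupts.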
Community Implementations: 3 code implementations (https://www.catalyzex.com/paper/arxiv:2302.02936/code)