Private GANs, Revisited

Published: 12 Oct 2023, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: We show that the canonical approach to training differentially private GANs -- updating the discriminator with differentially private stochastic gradient descent (DPSGD) -- can yield significantly improved results after modifications to training. Specifically, we argue that existing instantiations of this approach neglect how adding noise only to discriminator updates inhibits discriminator training, disrupting the balance between the generator and discriminator necessary for successful GAN training. We show that a simple fix -- taking more discriminator steps between generator steps -- restores this balance and improves results. With the same goal of restoring parity, we experiment with further modifications -- namely, large batch sizes and an adaptive discriminator update frequency -- to improve discriminator training, and observe further gains in generation quality. Our results demonstrate that on standard image synthesis benchmarks, DPSGD outperforms all alternative GAN privatization schemes. Code: https://github.com/alexbie98/dpgan-revisit.
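To make the recipe described in the abstract concrete, below is a minimal sketch in PyTorch. It is an illustrative reconstruction, not the authors' implementation (see the linked repository for that): only the discriminator is privatized, via per-example gradient clipping plus Gaussian noise (DPSGD), and the discriminator takes several steps between generator steps. All names and hyperparameters here (`n_disc`, `clip_norm`, `noise_mult`, `z_dim`) are assumptions for illustration.

```python
# Illustrative sketch (not the authors' code): DPSGD on the discriminator only,
# with n_disc discriminator steps per generator step.
import torch
import torch.nn.functional as F


def dp_disc_step(D, G, real_batch, opt_D, clip_norm, noise_mult, z_dim):
    """One DPSGD discriminator update: clip each per-example gradient, sum,
    add Gaussian noise, and average. Fake samples require no privacy."""
    opt_D.zero_grad()
    summed = [torch.zeros_like(p) for p in D.parameters()]
    for x in real_batch:  # microbatch size 1: exact per-example gradients
        fake = G(torch.randn(1, z_dim)).detach()
        logits_real, logits_fake = D(x.unsqueeze(0)), D(fake)
        loss = (F.binary_cross_entropy_with_logits(
                    logits_real, torch.ones_like(logits_real))
                + F.binary_cross_entropy_with_logits(
                    logits_fake, torch.zeros_like(logits_fake)))
        grads = torch.autograd.grad(loss, list(D.parameters()))
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, (clip_norm / (norm + 1e-12)).item())  # clip to clip_norm
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    for p, s in zip(D.parameters(), summed):
        noise = torch.randn_like(s) * (noise_mult * clip_norm)
        p.grad = (s + noise) / len(real_batch)  # noisy mean gradient
    opt_D.step()


def gen_step(D, G, opt_G, batch_size, z_dim):
    """Generator update; by post-processing, no additional noise is needed."""
    opt_G.zero_grad()
    logits = D(G(torch.randn(batch_size, z_dim)))
    F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits)).backward()
    opt_G.step()


def train(D, G, opt_D, opt_G, real_loader, num_gen_steps, n_disc,
          clip_norm, noise_mult, batch_size, z_dim):
    """n_disc private discriminator steps between consecutive generator steps."""
    batches = iter(real_loader)  # re-create when exhausted; omitted for brevity
    for _ in range(num_gen_steps):
        for _ in range(n_disc):
            dp_disc_step(D, G, next(batches), opt_D, clip_norm, noise_mult, z_dim)
        gen_step(D, G, opt_G, batch_size, z_dim)
```

The inner `n_disc` loop is the paper's central fix: since noise is added only to discriminator gradients, extra discriminator steps compensate for the noisier updates and restore the generator-discriminator balance. In practice one would track the cumulative privacy cost of the discriminator steps with a DP accountant; that bookkeeping is omitted here.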
Certifications: Survey Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version; final changes suggested by the AE, plus a link to the code.
Assigned Action Editor: ~Yu-Xiang_Wang1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 844