Adam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance


Sep 29, 2021 (edited Nov 22, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: Generative adversarial networks, non-convex optimization
  • Abstract: Adaptive methods are widely used for training generative adversarial networks (GANs). While some work has pinpointed the marginal value of adaptive methods in minimization problems, it remains unclear why they are still the method of choice for GAN training. This paper formally studies how adaptive methods help performance in GANs. First, we dissect Adam---the most popular adaptive method for GAN training---by comparing the direction and the norm of its update vector with those of SGDA. We empirically show that SGDA with the same update-vector norm as Adam reaches similar or even better performance than Adam. This empirical study encourages us to consider normalized stochastic gradient descent ascent (nSGDA) as a simpler alternative to Adam. We then propose a synthetic theoretical framework to understand why nSGDA yields better performance than SGDA for GANs. In this framework, we prove that a GAN trained with nSGDA provably recovers all the modes of the true distribution. In contrast, the same networks trained with SGDA, under any learning-rate configuration, suffer from mode collapse. The critical insight in our analysis is that normalizing the gradients forces the discriminator and generator to update at the same pace. We empirically show the competitive performance of nSGDA on real-world datasets.
  • Supplementary Material: zip
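The update rule the abstract describes — SGDA where each player's gradient is rescaled to unit norm, so the discriminator and generator move at the same pace — can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the function name, signature, and `eps` stabilizer are ours, not taken from the paper or its code.

```python
import numpy as np

def nsgda_step(theta_d, theta_g, grad_d, grad_g, lr_d, lr_g, eps=1e-8):
    """One normalized SGDA step (hypothetical sketch of nSGDA).

    Each player moves a fixed distance (its learning rate) along its own
    gradient direction: ascent for the discriminator on the GAN objective,
    descent for the generator. Normalizing both gradients equalizes the
    two players' update magnitudes regardless of raw gradient scale.
    """
    theta_d = theta_d + lr_d * grad_d / (np.linalg.norm(grad_d) + eps)
    theta_g = theta_g - lr_g * grad_g / (np.linalg.norm(grad_g) + eps)
    return theta_d, theta_g
```

For example, a discriminator gradient of norm 5 and a generator gradient of norm 2 both produce updates of norm `lr_d` and `lr_g` respectively — unlike plain SGDA, where the larger gradient would dominate.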