AGE: Enhancing the Convergence on GANs using Alternating extra-gradient with Gradient Extrapolation

Published: 08 Dec 2021, Last Modified: 05 May 2023
DGMs and Applications @ NeurIPS 2021 Oral
Keywords: GAN, Gradient Extrapolation
TL;DR: Enhance GAN training by extrapolating past gradients in a nonlinear way
Abstract: Generative adversarial networks (GANs) are notably difficult to train since the parameters can get stuck in a local optimum. As a result, training often suffers not only from degraded convergence speed but also from limits on the representational power of the trained network. Existing optimization methods that stabilize convergence require multiple gradient computations per iteration. We propose AGE, an alternating extra-gradient method with nonlinear gradient extrapolation, which avoids these extra computations and exhibits better convergence properties. It estimates the lookahead step using a nonlinear mixing of past gradient sequences. Empirical results on CIFAR-10, CelebA, and several synthetic datasets demonstrate that the introduced approach significantly improves convergence and yields better generative models.
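The abstract only sketches the update rule, but the core idea can be illustrated: classical extra-gradient needs a second gradient evaluation at a lookahead point, and AGE instead estimates that lookahead gradient from gradients it already has. Below is a minimal, hypothetical PyTorch sketch under that reading. The cosine-weighted mixing rule and all names (`mix`, `age_step`) are assumptions for illustration only; the paper's actual nonlinear mixing of past gradient sequences is not specified in the abstract.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: the abstract does not specify AGE's exact
# nonlinear mixing rule, so `mix` below (a cosine-similarity-weighted
# extrapolation) and all function names are hypothetical stand-ins.

def mix(g_prev, g_curr, beta=1.0):
    """Extrapolate a lookahead gradient from the two most recent
    gradients; the weight depends nonlinearly on how well the past
    and current gradients agree (assumed rule)."""
    cos = F.cosine_similarity(g_prev.flatten(), g_curr.flatten(), dim=0)
    w = beta * torch.clamp(cos, min=0.0)  # extrapolate only when directions agree
    return (1 + w) * g_curr - w * g_prev

def age_step(x, y, f, gx_prev, gy_prev, lr=0.2):
    """One alternating step on min_x max_y f(x, y): each player's
    lookahead gradient is extrapolated from its stored past gradient
    instead of being recomputed at a lookahead point, so only one
    gradient evaluation per player per iteration is needed."""
    gx = torch.autograd.grad(f(x, y), x)[0]
    gx_hat = mix(gx_prev, gx) if gx_prev is not None else gx
    x = (x - lr * gx_hat).detach().requires_grad_()  # min player descends
    gy = torch.autograd.grad(f(x, y), y)[0]          # alternating: sees the new x
    gy_hat = mix(gy_prev, gy) if gy_prev is not None else gy
    y = (y + lr * gy_hat).detach().requires_grad_()  # max player ascends
    return x, y, gx, gy

# Toy bilinear game f(x, y) = x * y with saddle point (0, 0); plain
# simultaneous gradient descent-ascent diverges here, while the
# extrapolated lookahead steps should spiral inward.
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
gx_prev = gy_prev = None
for _ in range(500):
    x, y, gx_prev, gy_prev = age_step(x, y, lambda a, b: a * b, gx_prev, gy_prev)
print(float(x), float(y))  # both coordinates should drift toward 0 in this toy run
```

The design point the sketch tries to capture is the cost saving the abstract claims: the standard extra-gradient method spends two gradient evaluations per player per iteration, whereas reusing and extrapolating the previous gradient brings this down to one.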