SGD Learns One-Layer Networks in WGANs

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: We show that stochastic gradient descent-ascent converges to a global optimum for WGANs with a one-layer generator network.
Abstract: Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max problem to global optimality, but in practice they are successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
Code: https://colab.research.google.com/drive/1P1hBwPcq21oj2IroX1rWXUJ7NUCD33CH
Keywords: Wasserstein GAN, global min-max, one-layer network
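
The setting in the abstract admits a compact illustration. Below is a minimal sketch of stochastic gradient descent-ascent for a WGAN with a one-layer linear generator G(z) = Az; it assumes a quadratic discriminator f_V(x) = x^T V x, a standard Gaussian latent z, and WGAN-style weight clipping on V. The hyperparameters and clipping constant are illustrative choices, not taken from the paper; see the linked Colab notebook for the authors' actual code.

    # Minimal sketch of SGD descent-ascent for a one-layer WGAN.
    # Assumptions (not from the paper): quadratic discriminator
    # f_V(x) = x^T V x, standard Gaussian latents, weight clipping,
    # and illustrative hyperparameters.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 4, 4                          # data dimension, latent dimension
    A_star = rng.normal(size=(d, k))     # ground-truth one-layer generator
    A = rng.normal(size=(d, k))          # learned generator G(z) = A z
    V = np.zeros((d, d))                 # discriminator f_V(x) = x^T V x
    eta_g, eta_d, clip, B = 0.01, 0.05, 1.0, 256

    for t in range(20001):
        z_real = rng.normal(size=(k, B))
        z_fake = rng.normal(size=(k, B))
        x = A_star @ z_real              # "real" samples
        g = A @ z_fake                   # generated samples

        # Stochastic gradients of L(A, V) = E[f_V(x)] - E[f_V(A z)]
        grad_V = (x @ x.T - g @ g.T) / B                     # ascent direction for V
        grad_A = -(V + V.T) @ A @ (z_fake @ z_fake.T) / B    # descent direction for A

        # Simultaneous descent-ascent step; clip V as a Lipschitz surrogate
        V = np.clip(V + eta_d * grad_V, -clip, clip)
        A = A - eta_g * grad_A

        if t % 5000 == 0:
            err = np.linalg.norm(A @ A.T - A_star @ A_star.T)
            print(f"iter {t:6d}  ||AA^T - A*A*^T||_F = {err:.4f}")

The sketch only illustrates the update rule: with a quadratic discriminator, the generator's distribution is determined by A A^T, so driving ||A A^T - A* A*^T||_F toward zero corresponds to reaching a global solution; the paper's contribution is the convergence guarantee, which this toy loop does not prove.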