Variance Regularizing Adversarial Learning
Karan Grewal, R Devon Hjelm, Yoshua Bengio
Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission · Readers: everyone
Abstract: We study how, in generative adversarial networks, variance in the discriminator's output affects the generator's ability to learn the data distribution. In particular, we contrast the results of several well-known techniques for training GANs when the discriminator is near-optimal and updated multiple times per generator update. As an alternative, we propose a method for training GANs that explicitly models the discriminator's output as a bi-modal Gaussian distribution over the real/fake indicator variables. To do so, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We observe that our new method, when trained together with a strong discriminator, provides meaningful, non-vanishing gradients.
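The submission page carries no code, so the following is a minimal PyTorch sketch of the meta-adversarial idea the abstract describes: two auxiliary "meta-discriminators" are trained to tell the discriminator's real/fake scores apart from samples of target Gaussians centred at +1 and -1, and the discriminator is trained to fool them, pushing its output toward the bi-modal Gaussian target. All names (M_real, M_fake, meta_step), network sizes, mode locations, and the variance sigma are illustrative assumptions, not the authors' reference design.

```python
# Sketch of meta-adversarial variance regularization (hypothetical
# names and hyperparameters; not the authors' reference implementation).
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

D = mlp(2, 1)       # discriminator: 2-D data point -> scalar score
M_real = mlp(1, 1)  # meta-discriminator for scores on real data
M_fake = mlp(1, 1)  # meta-discriminator for scores on fake data
bce = nn.BCEWithLogitsLoss()

opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_M = torch.optim.Adam(list(M_real.parameters()) + list(M_fake.parameters()), lr=1e-4)

def meta_step(x_real, x_fake, sigma=0.1):
    """One update of the meta-discriminators, then of the discriminator D."""
    s_real, s_fake = D(x_real), D(x_fake)

    # Samples from the target modes: Gaussians at +1 (real) and -1 (fake).
    t_real = 1.0 + sigma * torch.randn_like(s_real)
    t_fake = -1.0 + sigma * torch.randn_like(s_fake)

    # Meta-discriminators learn to separate D's scores (label 0) from
    # target-Gaussian samples (label 1); D's scores are detached here.
    loss_M = (bce(M_real(t_real), torch.ones_like(t_real)) +
              bce(M_real(s_real.detach()), torch.zeros_like(s_real)) +
              bce(M_fake(t_fake), torch.ones_like(t_fake)) +
              bce(M_fake(s_fake.detach()), torch.zeros_like(s_fake)))
    opt_M.zero_grad(); loss_M.backward(); opt_M.step()

    # D is updated so its score distributions fool the meta-discriminators,
    # i.e. implicitly match the bi-modal Gaussian target.
    loss_D = (bce(M_real(D(x_real)), torch.ones_like(s_real)) +
              bce(M_fake(D(x_fake)), torch.ones_like(s_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

# Example usage with toy 2-D batches (generator omitted for brevity):
# x_real = torch.randn(64, 2) + 2.0
# x_fake = torch.randn(64, 2) - 2.0
# meta_step(x_real, x_fake)
```

Detaching the scores in the meta-discriminator update mirrors standard GAN practice: each player is optimized against a frozen opponent, so the meta-discriminators never backpropagate into D on their own step.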
TL;DR: We introduce meta-adversarial learning, a new technique for regularizing GANs, and propose a training method that explicitly controls the discriminator's output distribution.
Keywords: Generative Adversarial Network, Integral Probability Metric, Meta-Adversarial Learning