Abstract: Although Generative Adversarial Networks achieve state-of-the-art results on a
variety of generative tasks, they are regarded as highly unstable and prone to missing
modes. We argue that these bad behaviors of GANs are due to the very particular
functional shape of the trained discriminators in high dimensional spaces, which
can easily stall training or push probability mass in the wrong direction,
towards regions of higher concentration than those of the data generating distribution.
We introduce several ways of regularizing the objective, which can dramatically
stabilize the training of GAN models. We also show that our regularizers can help
distribute probability mass fairly across the modes of the data generating
distribution during the early phases of training, thus providing a unified solution
to the missing modes problem.
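
The abstract does not spell out the regularizers themselves; as a purely illustrative sketch (not the paper's exact formulation), one common way to regularize a generator objective toward mode coverage is to add an encoder-based reconstruction penalty. In the PyTorch-style snippet below, the generator `G`, discriminator `D`, encoder `E`, and the weights `lambda1`/`lambda2` are all hypothetical names chosen for illustration:

```python
import torch

def generator_loss(D, G, E, x_real, z, lambda1=0.02, lambda2=0.02):
    """Illustrative generator objective with a mode-regularization term.

    D, G, E are assumed to be callables (e.g. nn.Module instances):
    a discriminator outputting probabilities in (0, 1), a generator,
    and an encoder mapping data back to latent space. All names and
    weights here are assumptions, not the paper's API.
    """
    # Standard non-saturating GAN term: push generated samples toward
    # regions the discriminator scores as real.
    gan_term = -torch.log(D(G(z)) + 1e-8).mean()

    # Reconstruction through the encoder: G(E(x)) should stay close to x,
    # tying generated mass to every mode present in the training data.
    x_rec = G(E(x_real))
    recon_term = ((x_real - x_rec) ** 2).flatten(1).sum(dim=1).mean()

    # Mode term: reconstructions should also look real to D, so the
    # generator is penalized for dropping modes of the data distribution.
    mode_term = -torch.log(D(x_rec) + 1e-8).mean()

    return gan_term + lambda1 * recon_term + lambda2 * mode_term
```

The intent of such a combination is that the reconstruction term anchors generated mass near every training example, while the mode term asks those reconstructions to also fool the discriminator, discouraging the generator from collapsing onto a subset of modes.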
Conflicts: umontreal.ca
Keywords: Deep learning, Unsupervised Learning
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1612.02136/code)