Annealed Generative Adversarial Networks

ICLR 2017 submission (modified: 14 Mar 2017)
Abstract: Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, the generative network generates samples from the data distribution upon arriving at the unique saddle point of a minimax objective function. In practice, however, getting close to this saddle point has proven difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies in the unbalanced nature of the game being played. Here, we propose to level the playing field and balance the minimax game by heating the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. As the heated data distribution is annealed, the network at each temperature is initialized with the parameters learned at the previous, higher temperature. We posit the conjecture that learning under continuous annealing in the nonparametric regime is stable, and propose an algorithm as its corollary. In our experiments, the annealed GAN algorithm, dubbed beta-GAN, was stable when trained with the unmodified objective function and did not suffer from mode collapse.
TL;DR: We introduce a new algorithm to stabilize the training of generative adversarial networks and address the problem of mode collapse by “heating” the data distribution in an annealing framework.
Keywords: Deep Learning, Unsupervised Learning
Conflicts: tuebingen.mpg.de, berkeley.edu
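
Below is a minimal sketch of the annealed training loop the abstract describes: start at high temperature, anneal toward the frozen empirical distribution at temperature zero, and warm-start the networks at each temperature with the parameters learned at the previous, higher one. The specifics here are assumptions, not the paper's method: "heating" is modeled as additive Gaussian noise on the data with a scale that anneals linearly to zero, the data is a toy 1-D distribution, and all architectures and hyperparameters are illustrative.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

G, D = mlp(2, 1), mlp(1, 1)              # generator: z -> x, discriminator: x -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

data = torch.randn(1024, 1) * 0.1 + 2.0  # toy 1-D "empirical distribution"

# Assumed annealing schedule: noise scale plays the role of temperature,
# decaying linearly from a high value to zero (the frozen data distribution).
sigmas = torch.linspace(3.0, 0.0, steps=50)

for sigma in sigmas:
    # No re-initialization here: keeping the same G and D across temperatures
    # is exactly the warm start from the previous, higher temperature.
    for _ in range(100):
        real = data[torch.randint(len(data), (64,))]
        heated = real + sigma * torch.randn_like(real)   # "heat" the data
        fake = G(torch.randn(64, 2))

        # Discriminator step on heated real samples vs. generated samples.
        d_loss = (bce(D(heated), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step with the standard non-saturating GAN objective,
        # left unmodified as in the abstract.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The design point the sketch illustrates is that at large sigma the heated distribution is nearly Gaussian and easy for the generator to match, and each small decrement in sigma only slightly perturbs the target, so training never has to solve the hard temperature-zero game from scratch.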