Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks
Youngjin Kim, Minjung Kim, Gunhee Kim
Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission
Abstract: We propose an approach to address two undesired properties of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of a dataset, they often fail to handle the structural discontinuity between disparate classes in the latent space. Second, the discriminators of GANs easily forget past samples produced by the generators, incurring instability during adversarial training. We argue that these two infamous problems of unsupervised GANs can be largely alleviated by a memory structure that both the generator and the discriminator can access. The generator can effectively store a large number of training samples needed to learn the underlying cluster distribution, which eases the structural discontinuity problem. At the same time, the discriminator can memorize previously generated samples, which mitigates the forgetting problem. We propose a novel end-to-end GAN model named memoryGAN, which incorporates a memory network that can be trained in an unsupervised manner and is integrable into many existing GAN models. Through evaluations on multiple datasets, including Fashion-MNIST, CelebA, CIFAR10, and Chairs, we show that our model is probabilistically interpretable and generates image samples of high visual fidelity. We also show that memoryGAN achieves state-of-the-art Inception scores among unsupervised GAN models on the CIFAR10 dataset, without additional tricks or weaker divergences.
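The core mechanism the abstract alludes to — a memory module that both networks read from — is typically realized as soft attention over key slots. The following is a minimal illustrative sketch (not the paper's actual implementation): it shows a content-based memory read, where a query vector is matched against memory keys by cosine similarity and the values are combined with softmax weights. All names here (`memory_read`, `cosine_similarity`) are hypothetical, and the slot contents are toy data.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity with a small epsilon for numerical safety.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

def memory_read(query, keys, values):
    """Soft content-based read: softmax over cosine similarities to each
    memory key, then a weighted sum of the corresponding value vectors."""
    sims = [cosine_similarity(query, k) for k in keys]
    m = max(sims)  # subtract max for a numerically stable softmax
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy memory with two slots; a query aligned with the first key should
# retrieve a vector dominated by the first slot's value.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = memory_read([1.0, 0.0], keys, values)
```

In memoryGAN, a structure of this kind lets the generator condition samples on learned cluster slots and lets the discriminator retain information about previously seen samples, rather than relying solely on network weights.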