Incremental training of multi-generative adversarial networks

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Generative neural networks map a standard, possibly low-dimensional, distribution to a complex high-dimensional distribution representing the real-world data set. However, a fixed input distribution and a specific network architecture may limit the model's ability to capture the diversity of the high-dimensional target space. To resolve this difficulty, we propose a training framework that greedily produces a series of generative adversarial networks that incrementally capture the diversity of the target space. We show, theoretically and empirically, that our training algorithm converges to the theoretically optimal distribution: the projection of the real distribution onto the convex hull of the networks' distribution space.
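
For concreteness, one way to write the projection the abstract refers to is the following (notation ours, not the paper's: $P$ is the real data distribution, $\mathcal{Q}$ the set of distributions a single generator can realize, and $\mathcal{C}$ its convex hull, i.e. all finite mixtures of generator distributions):

$$Q^{*} \;=\; \underset{Q \in \mathcal{C}}{\arg\min}\; D_{\mathrm{KL}}\!\left(Q \,\Vert\, P\right), \qquad \mathcal{C} \;=\; \Big\{\, \sum_{i=1}^{k} \alpha_i Q_i \;:\; Q_i \in \mathcal{Q},\; \alpha_i \ge 0,\; \sum_{i=1}^{k} \alpha_i = 1 \,\Big\}.$$

Whether the paper projects with $D_{\mathrm{KL}}(Q \,\Vert\, P)$ (the standard I-projection) or the reverse KL direction is a detail of the paper itself; the abstract states only that the optimum is the projection of $P$ onto $\mathcal{C}$.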
Keywords: GAN, Incremental training, Information projection, Mixture distribution
TL;DR: We propose a new method to incrementally train a mixture generative model to approximate the information projection of the real data distribution.
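
As a rough illustration of the incremental scheme the TL;DR describes, here is a minimal, runnable sketch of the mixture bookkeeping, assuming a fixed mixing weight `alpha` for each new component; the actual adversarial training step is stubbed out (`train_component` below is a toy placeholder, not the paper's algorithm):

```python
import random

def train_component(data, generators, weights):
    # Toy placeholder for adversarial training: each "generator" simply
    # memorizes a random subset of the data. The paper would train a real
    # GAN here, emphasizing regions the current mixture covers poorly.
    subset = random.sample(data, k=max(1, len(data) // 4))
    return lambda: random.choice(subset)

def incremental_mixture_training(data, num_components, alpha=0.3):
    """Greedily grow a mixture of generators, one component at a time."""
    generators, weights = [], []
    for _ in range(num_components):
        g = train_component(data, generators, weights)
        # Shrink existing weights by (1 - alpha) and give the new component
        # mass alpha (the very first component takes all the mass), so the
        # weights always sum to 1.
        new_weight = alpha if generators else 1.0
        weights = [w * (1 - alpha) for w in weights] + [new_weight]
        generators.append(g)
    return generators, weights

def sample_mixture(generators, weights):
    # Pick a component in proportion to its weight, then sample from it.
    g = random.choices(generators, weights=weights)[0]
    return g()

if __name__ == "__main__":
    data = list(range(100))  # stand-in for a real data set such as MNIST
    gens, ws = incremental_mixture_training(data, num_components=3)
    print(sample_mixture(gens, ws))
```

A fixed `alpha` is only one possible schedule; a greedy method could instead fit each new weight by minimizing the divergence between the updated mixture and the data.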
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [MNIST](https://paperswithcode.com/dataset/mnist)