Coverage and Quality Driven Training of Generative Image Models

Thomas LUCAS, Konstantin SHMELKOV, Karteek ALAHARI, Cordelia SCHMID, Jakob VERBEEK

Sep 27, 2018 ICLR 2019 Conference Blind Submission
  • Abstract: Generative modeling of natural images has been extensively studied in recent years, yielding remarkable progress. Current state-of-the-art methods are either based on maximum likelihood estimation or adversarial training. Both methods have their own drawbacks, which are complementary in nature. The first leads to over-generalization, as the maximum likelihood criterion encourages models to cover the support of the training data by heavily penalizing small probability mass assigned to training data points. Simplifying assumptions in such models limit their capacity and make them spill probability mass on unrealistic samples. The second leads to mode-dropping, since adversarial training encourages high-quality samples from the model but only indirectly enforces diversity among them. To overcome these drawbacks we make two contributions. First, we propose a model that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space. This induces correlations among the pixels given the latent variables, improving over the factorial decoders commonly used in variational autoencoders. Second, we propose a unified training approach that leverages both coverage-based and quality-based criteria. Our models obtain likelihood scores competitive with state-of-the-art likelihood-based models, while achieving sample quality typical of adversarially trained networks.
  • Keywords: deep learning, generative modeling, unsupervised learning, maximum likelihood, adversarial learning, gan, vae
  • TL;DR: Generative models that yield GAN-like samples and achieve competitive likelihood on held-out data.
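
The abstract's first contribution, mapping decoder outputs through deterministic invertible layers, can be illustrated with an affine coupling layer, one common form of invertible transformation. This is a hypothetical sketch, not the paper's exact architecture: the functions `scale_fn` and `shift_fn` stand in for learned networks, and the layer design here is an assumption for illustration. The key properties shown are exact invertibility and a tractable log-determinant, which is what lets such a layer induce pixel correlations given the latents while keeping the likelihood computable.

```python
import numpy as np

def coupling_forward(x, scale_fn, shift_fn):
    # Split the input in half; transform the second half
    # conditioned on the first, leaving the first half unchanged.
    x1, x2 = np.split(x, 2)
    s = scale_fn(x1)
    y2 = x2 * np.exp(s) + shift_fn(x1)
    log_det = np.sum(s)  # exact log-determinant of the Jacobian
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, scale_fn, shift_fn):
    # Invert by reading the same scale/shift off the unchanged half.
    y1, y2 = np.split(y, 2)
    s = scale_fn(y1)
    x2 = (y2 - shift_fn(y1)) * np.exp(-s)
    return np.concatenate([y1, x2])

# Toy scale/shift functions standing in for learned neural networks.
scale_fn = lambda h: np.tanh(h)
shift_fn = lambda h: 0.5 * h

x = np.array([0.3, -1.2, 0.7, 2.0])
y, log_det = coupling_forward(x, scale_fn, shift_fn)
x_rec = coupling_inverse(y, scale_fn, shift_fn)
print(np.allclose(x, x_rec))  # True: the transform is exactly invertible
```

Because the Jacobian log-determinant is available in closed form, stacking such layers on top of a VAE decoder keeps the model's likelihood tractable, which is what allows the coverage-based (maximum likelihood) criterion to be combined with a quality-based adversarial one.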