On the Quantitative Analysis of Decoder-Based Generative Models

Published: 06 Feb 2017, Last Modified: 22 Oct 2023 · ICLR 2017 Poster
Abstract: The past several years have seen remarkable progress in generative models that produce convincing samples of images and other modalities. A shared component of some popular models, such as generative adversarial networks and generative moment matching networks, is a decoder network: a parametric deep neural net that defines a generative distribution. Unfortunately, it can be difficult to quantify the performance of these models because log-likelihood estimation is intractable, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling to evaluate log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.
TL;DR: We propose to use Annealed Importance Sampling to evaluate decoder-based generative networks and investigate various properties of these models; a minimal illustrative AIS sketch appears below.
Conflicts: cs.toronto.edu, cs.cmu.edu, umontreal.ca, ttic.edu, openai.com
Keywords: Deep learning, Unsupervised Learning
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/arxiv:1611.04273/code)
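To make the abstract's proposal concrete, here is a minimal sketch of an AIS log-likelihood estimate for a decoder-style model. Everything in it is an illustrative assumption rather than the paper's setup: the toy linear-Gaussian "decoder", the Metropolis-Hastings transition operator, and all names and settings (`ais_log_px`, `mh_stddev`, the dimensions, chain and step counts) are hypothetical.

```python
# Hedged sketch of Annealed Importance Sampling (AIS) for estimating log p(x)
# under a toy decoder model. The decoder, names, and settings are illustrative
# assumptions, not the paper's models or code.
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": z ~ N(0, I_dz), x | z ~ N(W z, sigma^2 I_dx).
dz, dx, sigma = 2, 5, 0.5
W = rng.normal(size=(dx, dz))

def log_prior(z):
    return -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * dz * np.log(2 * np.pi)

def log_likelihood(x, z):
    mean = z @ W.T
    return (-0.5 * np.sum((x - mean) ** 2, axis=-1) / sigma ** 2
            - 0.5 * dx * np.log(2 * np.pi * sigma ** 2))

def ais_log_px(x, n_chains=64, n_steps=500, mh_stddev=0.1):
    """Estimate log p(x) by annealing from the prior to the posterior.

    Intermediate targets: log f_t(z) = log p(z) + beta_t * log p(x|z),
    with beta_0 = 0 (prior) and beta_T = 1 (unnormalized posterior).
    """
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    z = rng.normal(size=(n_chains, dz))   # exact samples from the beta=0 target
    log_w = np.zeros(n_chains)            # accumulated log importance weights
    for t in range(1, n_steps + 1):
        # Weight update: density ratio of consecutive intermediate targets.
        log_w += (betas[t] - betas[t - 1]) * log_likelihood(x, z)
        # One Metropolis-Hastings step that leaves the current target invariant.
        prop = z + mh_stddev * rng.normal(size=z.shape)
        log_accept = (log_prior(prop) + betas[t] * log_likelihood(x, prop)
                      - log_prior(z) - betas[t] * log_likelihood(x, z))
        accept = np.log(rng.uniform(size=n_chains)) < log_accept
        z[accept] = prop[accept]
    # Averaging the weights gives an unbiased estimate of p(x); taking the log
    # yields, in expectation, a lower bound on log p(x).
    return np.logaddexp.reduce(log_w) - np.log(n_chains)

x_obs = rng.normal(size=dx)
print("AIS estimate of log p(x):", ais_log_px(x_obs))
```

In this sketch the log of the averaged importance weights is a stochastic lower bound on log p(x); the paper's contribution is to validate the accuracy of such AIS estimates for decoder-based models using bidirectional Monte Carlo.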