Improving Sampling from Generative Autoencoders with Markov Chains
Antonia Creswell, Kai Arulkumaran, Anil Anthony Bharath
Oct 31, 2016 (modified: Jan 12, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model. We call the distribution to which the inference model maps observed samples the learned latent distribution, which may not be consistent with the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively decoding and encoding, which allows us to sample from the learned latent distribution. Since the generative model learns to map from the learned latent distribution, rather than the prior, we may use MCMC to improve the quality of samples drawn from the generative model, especially when the learned latent distribution is far from the prior. Using MCMC sampling, we are able to reveal previously unseen differences between generative autoencoders trained either with or without a denoising criterion.
TL;DR: Iteratively encoding and decoding samples from generative autoencoders recovers samples from the true latent distribution learned by the model
Keywords: Deep learning, Unsupervised Learning, Theory
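The iterative decode–encode procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `encode` and `decode` stand-ins below are hypothetical linear maps substituting for trained inference and generative networks, and `n_steps` is an assumed chain length.

```python
import numpy as np

def mcmc_sample(decode, encode, z0, n_steps=5):
    """Markov chain sampling from a generative autoencoder:
    start from a latent draw z0 (e.g. from the prior), then
    repeatedly decode to observation space and re-encode.
    The chain moves z toward the learned latent distribution."""
    z = z0
    for _ in range(n_steps):
        x = decode(z)   # generate an observation from the current latent
        z = encode(x)   # map it back through the inference model
    return decode(z)    # final sample from the generative model

# Toy linear stand-ins for trained networks (assumptions for illustration):
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
decode = lambda z: z @ W                   # hypothetical decoder
encode = lambda x: x @ np.linalg.pinv(W)   # hypothetical approximate inverse

z0 = rng.normal(size=(1, 2))               # initial draw from the prior
x = mcmc_sample(decode, encode, z0)
print(x.shape)  # (1, 2)
```

With trained networks in place of the linear stand-ins, each pass through `decode` then `encode` is one transition of the Markov chain; when the learned latent distribution differs from the prior, later samples in the chain should improve over a single decode of a prior draw.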