Epitomic Variational Autoencoders
Serena Yeung, Anitha Kannan, Yann Dauphin, Li Fei-Fei
Nov 04, 2016 (modified: Jan 19, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: In this paper, we propose the epitomic variational autoencoder (eVAE), a probabilistic generative model of high-dimensional data. eVAE is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with the other epitomes in the composition. We show that the proposed model substantially mitigates the common problem of model over-pruning in variational autoencoders (VAEs). We demonstrate that eVAE uses its model capacity efficiently and generalizes better than the VAE, presenting qualitative and quantitative results on the MNIST and TFD datasets.
TL;DR: We introduce an extension of variational autoencoders that learns multiple shared latent subspaces to address the issue of model capacity underutilization.
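To make the idea of partially shared latent subspaces concrete, here is a minimal illustrative sketch (not the authors' implementation): each epitome is modeled as a binary mask selecting a contiguous window of latent units, with overlapping windows so that neighboring epitomes share part of the latent space. The function names, the contiguous-window layout, and the stride parameter are all assumptions for illustration.

```python
import numpy as np

def epitome_masks(latent_dim, epitome_size, stride):
    """Build binary masks over the latent vector: each epitome activates a
    contiguous window of latent units, and neighboring windows overlap so
    epitomes partially share the latent subspace (illustrative assumption)."""
    masks = []
    for start in range(0, latent_dim - epitome_size + 1, stride):
        m = np.zeros(latent_dim)
        m[start:start + epitome_size] = 1.0
        masks.append(m)
    return np.stack(masks)

def sample_z(mu, log_var, mask, rng):
    """Reparameterized Gaussian sample restricted to one epitome's subspace."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    return z * mask  # latent units outside the chosen epitome are zeroed

rng = np.random.default_rng(0)
masks = epitome_masks(latent_dim=8, epitome_size=4, stride=2)
print(masks.shape)  # (3, 8): three overlapping epitomes over 8 latent units
mu, log_var = np.zeros(8), np.zeros(8)
z = sample_z(mu, log_var, masks[0], rng)
print(np.count_nonzero(z))  # at most 4 active latent units
```

In a full model, the decoder would receive the masked `z`, so each training example only exercises one sparse subspace while the shared encoder-decoder weights are updated by all epitomes.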
Conflicts: stanford.edu, fb.com, montreal.ca