Epitomic Variational Autoencoders

25 Apr 2024 (modified: 21 Jul 2022) · Submitted to ICLR 2017 · Readers: Everyone
Abstract: In this paper, we propose the epitomic variational autoencoder (eVAE), a probabilistic generative model of high-dimensional data. eVAE is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with the other epitomes in the composition. We show that the proposed model substantially mitigates model over-pruning, a common problem in variational autoencoders (VAEs). We demonstrate that eVAE uses its model capacity efficiently and generalizes better than the VAE, through qualitative and quantitative results on the MNIST and TFD datasets.
TL;DR: We introduce an extension of variational autoencoders that learns multiple shared latent subspaces to address the issue of model capacity underutilization.
Keywords: Unsupervised Learning
Conflicts: stanford.edu, fb.com, montreal.ca
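
A minimal sketch of the idea described in the abstract, not the authors' code: each epitome is modeled here as an overlapping contiguous binary mask over a shared latent vector, one epitome is chosen uniformly at random per example (the paper infers this choice; that part is not reproduced), and the KL term is charged only on the active dimensions. All names (`EpitomicVAE`, `epitome_size`, `stride`) and architectural details are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EpitomicVAE(nn.Module):
    """Hypothetical sketch: a VAE whose latent space is covered by
    overlapping sparse subsets ('epitomes') sharing one encoder-decoder."""

    def __init__(self, x_dim=784, h_dim=400, z_dim=20, epitome_size=4, stride=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # Each epitome is a binary mask over latent dims; because epitomes
        # overlap, decoder weights for shared dims are trained by several
        # epitomes rather than belonging to one.
        starts = list(range(0, z_dim - epitome_size + 1, stride))
        masks = torch.zeros(len(starts), z_dim)
        for i, s in enumerate(starts):
            masks[i, s:s + epitome_size] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Pick one epitome per example, uniformly at random in this sketch.
        idx = torch.randint(self.masks.size(0), (x.size(0),), device=x.device)
        m = self.masks[idx]                       # (batch, z_dim) binary mask
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        z = z * m                                 # inactive dims are zeroed
        x_logits = self.dec(z)
        # KL is charged only on the active subset, so dims that are unused
        # for this example are not pushed to the prior and pruned globally --
        # the over-pruning issue the abstract refers to.
        kl = 0.5 * (m * (mu.pow(2) + logvar.exp() - logvar - 1)).sum(dim=1)
        rec = F.binary_cross_entropy_with_logits(
            x_logits, x, reduction="none").sum(dim=1)
        return (rec + kl).mean()
```

Under these assumptions, `loss = EpitomicVAE()(x)` for a batch `x` of values in [0, 1] gives the negative ELBO of the sampled epitome; the overlap between masks is what lets capacity be shared across epitomes instead of pruned.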
11 Replies
