Towards Hierarchical Discrete Variational Autoencoders

16 Oct 2019 (modified: 05 May 2023) · AABI 2019
Keywords: deep learning, representation learning, variational auto-encoders, variational inference, VAE, generative models, discrete latent variable, Gumbel-softmax
TL;DR: We introduce a hierarchy of discrete (categorical) latent variables trained with the Concrete/Gumbel-Softmax relaxation, and derive an upper bound on the absolute difference between the unbiased and the biased (relaxed) objectives.
Abstract: Variational Autoencoders (VAEs) have proven to be powerful latent variable models. However, the form of the approximate posterior can limit the expressiveness of the model. Categorical distributions are flexible and useful building blocks, for example in neural memory layers. We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hierarchy of variational memory layers. The Concrete/Gumbel-Softmax relaxation allows maximizing a surrogate of the Evidence Lower Bound by stochastic gradient ascent. We show that, when using a limited number of latent variables, HD-VAE outperforms the Gaussian baseline on modelling multiple binary image datasets. Training very deep HD-VAEs remains a challenge due to the relaxation bias induced by the surrogate objective. We introduce a formal definition of this bias and conduct a preliminary theoretical and empirical study of it.
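
For reference, below is a minimal PyTorch sketch of the Concrete/Gumbel-Softmax sampling step that the abstract refers to. The function name, temperature value, and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Draw a relaxed one-hot sample from a categorical distribution.

    logits: unnormalized log-probabilities, shape (..., num_categories).
    tau: temperature; as tau -> 0 the sample approaches a discrete one-hot
    vector, at the cost of higher-variance gradients.
    """
    # Gumbel(0, 1) noise via the inverse CDF: g = -log(-log(u)), u ~ Uniform(0, 1).
    # Small constants guard against log(0).
    u = torch.rand_like(logits)
    gumbel_noise = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    # Relaxed sample on the simplex: differentiable w.r.t. logits, so a
    # surrogate of the ELBO can be maximized by stochastic gradient ascent.
    return F.softmax((logits + gumbel_noise) / tau, dim=-1)

# Illustrative usage: a batch of 2 categorical variables with 10 categories.
logits = torch.randn(2, 10)
relaxed = gumbel_softmax_sample(logits, tau=0.5)
print(relaxed.sum(dim=-1))  # each row sums to 1: a point on the simplex
```

Because the relaxed sample only approaches a discrete one-hot vector as the temperature goes to zero, training maximizes a biased surrogate of the true discrete objective; this gap is the relaxation bias the abstract studies.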