Keywords: vae, posterior collapse, generative, generative model, latent variable, probabilistic model, pca, ppca, probabilistic pca
TL;DR: We show that posterior collapse in linear VAEs is caused entirely by the marginal log-likelihood, not the ELBO. Experiments on deep VAEs suggest a similar phenomenon is at play.
Abstract: Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA). We identify how local maxima can emerge from the marginal log-likelihood of pPCA, which yields similar local maxima for the evidence lower bound (ELBO). We show that training a linear VAE with variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions. We provide empirical evidence that the presence of local maxima causes posterior collapse in deep non-linear VAEs. Our findings help to explain a wide range of heuristic approaches in the literature that attempt to diminish the effect of the KL term in the ELBO to reduce posterior collapse.
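As a sketch of the pPCA correspondence the abstract builds on, the closed-form maximum-likelihood solution of pPCA (Tipping & Bishop) places the learned weight columns in the top principal subspace, which is the global maximum the linear VAE is claimed to recover. The toy data and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples in d dimensions with an intrinsic q-dim signal.
n, d, q = 2000, 5, 2
W_true = rng.normal(size=(d, q))
X = rng.normal(size=(n, q)) @ W_true.T + 0.1 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

# Closed-form pPCA maximum likelihood (Tipping & Bishop):
#   sigma2 = average of the discarded covariance eigenvalues,
#   W_ml   = U_q (L_q - sigma2 I)^{1/2}, with U_q the top-q eigenvectors.
S = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(S)                  # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]    # descending order
sigma2 = eigvals[q:].mean()
W_ml = eigvecs[:, :q] * np.sqrt(eigvals[:q] - sigma2)

# Check that the ML columns lie (approximately) in the true signal
# subspace: project W_ml onto an orthonormal basis of W_true and
# compare norms; a ratio near 1 means near-perfect alignment.
Q_true, _ = np.linalg.qr(W_true)
alignment = np.linalg.norm(Q_true @ (Q_true.T @ W_ml)) / np.linalg.norm(W_ml)
print(alignment)
```

Here the low noise level (0.1) keeps the principal directions well separated from the noise floor, so the recovered subspace is essentially exact; as the noise variance approaches the smallest signal eigenvalue, the corresponding direction becomes unidentifiable, which is where the local-maxima story in the abstract becomes relevant.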