Improving Variational Autoencoders with Density Gap-based Regularization

Published: 31 Oct 2022, Last Modified: 12 Oct 2022 · NeurIPS 2022 Accept
Keywords: Variational Autoencoders, Posterior Collapse, Hole Problem
TL;DR: We propose a novel density gap-based regularization for VAEs that solves posterior collapse while avoiding the hole problem.
Abstract: Variational autoencoders (VAEs) are one of the most powerful unsupervised learning frameworks in NLP for latent representation learning and latent-directed generation. The classic optimization goal of VAEs is to maximize the Evidence Lower Bound (ELBo), which consists of a conditional likelihood for generation and a negative Kullback-Leibler (KL) divergence for regularization. In practice, optimizing the ELBo often leads the posterior distributions of all samples to converge to the same degenerate local optimum, a phenomenon known as posterior collapse or KL vanishing. Effective methods have been proposed to prevent posterior collapse in VAEs, but we observe that they essentially trade posterior collapse for the hole problem, i.e., the mismatch between the aggregated posterior distribution and the prior distribution. To this end, we introduce new training objectives that tackle both problems through a novel regularization based on the probabilistic density gap between the aggregated posterior distribution and the prior distribution. Through experiments on language modeling, latent space visualization, and interpolation, we show that our proposed method solves both problems effectively and thus outperforms existing methods in latent-directed generation. To the best of our knowledge, we are the first to jointly solve the hole problem and posterior collapse.
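For reference, the ELBo objective described in the abstract can be written explicitly in standard VAE notation (with $\theta$ the generative parameters, $\phi$ the inference parameters, and $p(z)$ the prior; this is the textbook form, not notation taken from the paper itself):

```latex
\mathcal{L}_{\mathrm{ELBo}}(\theta, \phi; x)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right]}_{\text{conditional likelihood}}
  - \underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{regularization}}
```

Posterior collapse corresponds to the KL term vanishing, with $q_\phi(z \mid x) \approx p(z)$ for every $x$; the hole problem is the mismatch between the aggregated posterior $q_\phi(z) = \mathbb{E}_{p_d(x)}\left[q_\phi(z \mid x)\right]$ and the prior $p(z)$, which leaves low-density "holes" in latent space that harm generation.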
Supplementary Material: zip