Variational Autoencoders for Text Modeling without Weakening the Decoder

27 Sep 2018 (modified: 26 Nov 2018) · ICLR 2019 Conference Withdrawn Submission
  • Abstract: Previous work (Bowman et al., 2015; Yang et al., 2017) has found it difficult to develop generative models based on variational autoencoders (VAEs) for text. To address the problem of the decoder ignoring information from the encoder (posterior collapse), these previous models weaken the capacity of the decoder to force the model to use information from the latent variables. However, this strategy is not ideal, as it degrades the quality of generated text and increases the number of hyper-parameters. In this paper, we propose a new VAE for text that uses a multimodal prior distribution, a modified encoder, and multi-task learning. We show that our model can generate well-conditioned sentences without weakening the capacity of the decoder. In addition, the multimodal prior distribution improves the interpretability of the acquired representations.
  • Keywords: variational autoencoders, generative model, deep neural network, text modeling, unsupervised learning, multimodal
  • TL;DR: We propose a variational autoencoder for text modeling that does not weaken the decoder, improving the quality of generated text and the interpretability of the acquired representations.
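The abstract's central ingredient is a multimodal prior in place of the usual standard Gaussian. The paper's exact formulation is not given here, so as a minimal sketch (all function names and the equal-weight mixture-of-Gaussians choice are assumptions, not the authors' method), the KL term of the ELBO can be estimated by Monte Carlo when the prior is a mixture and thus has no closed-form KL against a Gaussian posterior:

```python
import numpy as np

def log_mog_prior(z, means, log_std=0.0):
    """log p(z) under an equal-weight mixture of K isotropic Gaussians.

    z: (n, d) samples; means: (K, d) component means (hypothetical example
    of a multimodal prior -- not the paper's exact parameterization).
    """
    K, d = means.shape
    var = np.exp(2.0 * log_std)
    # Squared distance of each sample to each component mean: (n, K)
    sq = ((z[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    log_comp = -0.5 * (sq / var + d * np.log(2 * np.pi * var))
    # Numerically stable log-sum-exp over components; equal weights 1/K
    m = log_comp.max(axis=1, keepdims=True)
    return m.squeeze(1) + np.log(np.exp(log_comp - m).sum(axis=1)) - np.log(K)

def mc_kl(mu, log_sigma, means, n_samples=2000, seed=0):
    """Monte Carlo estimate of KL(q(z|x) || p(z)) with q = N(mu, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    d = mu.shape[0]
    z = mu + np.exp(log_sigma) * rng.standard_normal((n_samples, d))
    log_q = (-0.5 * (((z - mu) / np.exp(log_sigma)) ** 2).sum(-1)
             - d * (0.5 * np.log(2 * np.pi) + log_sigma))
    return float((log_q - log_mog_prior(z, means)).mean())
```

For instance, if the posterior sits exactly on one of two well-separated prior modes, the estimated KL approaches log 2, reflecting only the mixture-weight cost rather than a pull toward a single global mode; this is the sense in which a multimodal prior can retain latent information without constraining the decoder.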