Neural Variational Sparse Topic Model

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Effectively inferring discriminative and coherent latent topics from short texts is a critical task for many real-world applications. Nevertheless, the task has proven to be a great challenge due to the data sparsity problem induced by the characteristics of short texts. A large number of topic models have been proposed to deal with this data sparsity problem; however, their complex, model-specific inference algorithms become a bottleneck for rapidly exploring variations of these traditional models. In this paper, we propose a novel model called the Neural Variational Sparse Topic Model (NVSTM), based on a popular sparsity-enhanced topic model named Sparse Topical Coding (STC). In the model, auxiliary word embeddings are utilized to improve the generation of representations. The Variational Autoencoder (VAE) approach is applied to infer the model efficiently, and its black-box inference process makes the model easy to extend (see the sketch after the keywords below). Experimental results on the Web Snippets and 20NewsGroups datasets show the effectiveness and efficiency of the model.
  • TL;DR: a neural sparsity-enhanced topic model based on VAE
  • Keywords: Variational Autoencoder, Sparse Topical Coding, Neural Variational Inference
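The abstract describes the NVSTM recipe only at a high level: STC-style sparse document codes, word embeddings in the decoder, and amortized VAE inference. The following is a minimal sketch of that general recipe in PyTorch; every name, layer size, the ReLU-plus-L1 sparsity mechanism, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a VAE-style sparse topic model: an encoder maps a
# bag-of-words vector to a (sparsified) document code, and a decoder built
# from topic and word embeddings reconstructs the word distribution.
# All hyperparameters and the sparsity penalty are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseTopicVAE(nn.Module):
    def __init__(self, vocab_size=2000, n_topics=50, embed_dim=100):
        super().__init__()
        # Encoder: bag-of-words -> Gaussian parameters of the document code
        self.enc = nn.Linear(vocab_size, 256)
        self.mu = nn.Linear(256, n_topics)
        self.logvar = nn.Linear(256, n_topics)
        # Decoder: topic embeddings and word embeddings define p(word | code)
        self.topic_emb = nn.Parameter(torch.randn(n_topics, embed_dim) * 0.01)
        self.word_emb = nn.Parameter(torch.randn(vocab_size, embed_dim) * 0.01)

    def forward(self, bow):
        h = F.relu(self.enc(bow))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick (standard VAE black-box inference)
        code = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # ReLU keeps the document code non-negative and yields exact zeros
        code = F.relu(code)
        # Word logits via topic and word embeddings
        logits = code @ self.topic_emb @ self.word_emb.t()
        return logits, mu, logvar, code

def loss_fn(logits, bow, mu, logvar, code, l1_weight=1e-3):
    # Multinomial reconstruction term over observed word counts
    recon = -(bow * F.log_softmax(logits, dim=-1)).sum(-1).mean()
    # KL divergence to a standard Gaussian prior
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    # L1 penalty as a stand-in for STC's sparsity-inducing regularization
    sparsity = code.abs().sum(-1).mean()
    return recon + kl + l1_weight * sparsity
```

Training would follow the usual VAE loop: feed batches of bag-of-words count vectors through `SparseTopicVAE` and minimize `loss_fn` with a stochastic optimizer; the resulting `code` tensor plays the role of the sparse topical representation of each document.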
