WEAKLY SEMI-SUPERVISED NEURAL TOPIC MODELS

Published: 17 Apr 2019, Last Modified: 05 May 2023 · LLD 2019
Keywords: topic models, variational inference, variational autoencoder, semi-supervised, deep learning
TL;DR: We propose supervising VAE-style topic models by intelligently adjusting the prior on a per-document basis. We find that a logit-normal posterior provides the best performance.
Abstract: We consider the problem of topic modeling in a weakly semi-supervised setting. In this scenario, we assume that the user knows a priori a subset of the topics she wants the model to learn and is able to provide a few exemplar documents for those topics. In addition, while each document typically consists of multiple topics, we do not assume that the user will identify all of its topics exhaustively. Recent state-of-the-art topic models such as NVDM, referred to herein as Neural Topic Models (NTMs), fall under the variational autoencoder framework. We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective. After analyzing the effect of informative priors, we propose a simple modification of the NVDM model using a logit-normal posterior, which we show achieves better alignment to user-desired topics than other NTMs.
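
To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch of the idea as we read it: a logit-normal posterior (a Gaussian over topic logits, pushed through a softmax) combined with a per-document informative prior, where the prior mean is shifted toward user-labeled topics for labeled documents and left at zero otherwise. This is an illustrative reconstruction under those assumptions, not the authors' implementation; all names (LogitNormalNTM, label_mask, prior_shift) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LogitNormalNTM(nn.Module):
    """Sketch of an NVDM-style topic model with a logit-normal posterior."""

    def __init__(self, vocab_size, num_topics, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
        self.mu = nn.Linear(hidden, num_topics)
        self.logvar = nn.Linear(hidden, num_topics)
        # Linear topic-word decoder, as in NVDM-style models.
        self.decoder = nn.Linear(num_topics, vocab_size)

    def forward(self, bow, label_mask=None, prior_shift=3.0):
        # bow: (batch, vocab_size) bag-of-words counts.
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterized sample of the topic logits, then softmax:
        # theta follows a logit-normal distribution over the simplex.
        eta = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        theta = F.softmax(eta, dim=-1)
        recon = -(bow * F.log_softmax(self.decoder(theta), dim=-1)).sum(-1)

        # Informative per-document prior: standard normal by default; for
        # documents with user-provided topic labels, shift the prior mean
        # toward the labeled topics (label_mask is a {0,1} indicator of
        # shape (batch, num_topics); prior_shift is a hypothetical knob).
        prior_mu = torch.zeros_like(mu)
        if label_mask is not None:
            prior_mu = prior_shift * label_mask
        # KL(N(mu, sigma^2) || N(prior_mu, I)) for a diagonal Gaussian.
        kl = 0.5 * (logvar.exp() + (mu - prior_mu) ** 2 - 1.0 - logvar).sum(-1)
        return (recon + kl).mean()  # negative ELBO, averaged over the batch
```

In this sketch the supervision enters only through the KL term: a shifted prior mean raises the prior probability of the labeled topics for that document without forcing the posterior to zero out the others, which matches the paper's assumption that labels are non-exhaustive.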