Abstract: Neural topic models (NTMs) have shown success in topic modeling with a wide range of applications in text analysis. NTMs based on generative models prioritize document representations with good reconstruction capabilities, but they are insufficient at preserving distances between documents in the topic space. To bridge this gap, inspired by manifold learning, we propose a neural topic model that reflects word-to-word relationships onto topic-to-topic associations. This is achieved by approximating, within the topic space, the distances between documents in the word space. Extensive experiments demonstrate that the proposed model outperforms state-of-the-art NTMs in the quality of learned topics, as measured by metrics such as purity, diversity, and coherence. Beyond that, the model provides more interpretable low-dimensional visualizations of documents.
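As a rough illustration of the distance-preservation idea described above, the sketch below shows one way such a regularizer could look, assuming pairwise Euclidean distances and a squared-error penalty between the word-space and topic-space distance matrices. The function name `distance_preservation_loss`, the weight `lambda_reg`, and the toy dimensions are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def distance_preservation_loss(doc_word_vecs, doc_topic_vecs):
    """Hypothetical regularizer: penalize mismatch between pairwise
    document distances in the word space and in the topic space."""
    # Pairwise Euclidean distances between documents in each space.
    d_word = torch.cdist(doc_word_vecs, doc_word_vecs, p=2)
    d_topic = torch.cdist(doc_topic_vecs, doc_topic_vecs, p=2)
    # Squared-error mismatch, averaged over all document pairs.
    return ((d_word - d_topic) ** 2).mean()

# Toy usage: 8 documents, 500-word vocabulary, 10 topics.
bow = torch.rand(8, 500)                            # bag-of-words vectors
theta = torch.softmax(torch.rand(8, 10), dim=-1)    # topic proportions
reg = distance_preservation_loss(bow, theta)
# Such a term would typically be added to the usual NTM objective, e.g.
# total_loss = reconstruction_loss + kl_divergence + lambda_reg * reg
```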