Multi-Sample Contrastive Neural Topic Model as Multi-Task Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Recent representation learning approaches that refine the global semantics of neural topic models optimize a weighted linear combination of the evidence lower bound (ELBO) on the log-likelihood and a discriminative objective that contrasts pairs of instances. However, contrastive learning at the individual-instance level may capture noisy mutual information that is irrelevant to the topic modeling task. Moreover, there is a potential conflict between the ELBO loss, which memorizes input details for better reconstruction quality, and the contrastive term, which attempts to generalize representations across inputs. To address these issues, we first hypothesize that useful features should be shared among multiple input samples. We therefore propose a novel set-based contrastive learning method for neural topic models that employs the concept of multi-sample representation learning. Second, because the solution of the linear-combination approach might not satisfy all objectives when they compete, we explicitly cast contrastive topic modeling as gradient-based multi-objective optimization, with the goal of reaching a Pareto stationary solution. Extensive experiments demonstrate that our framework consistently produces higher-performing neural topic models in terms of topic coherence and downstream performance.
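For intuition only, the sketch below illustrates the kind of gradient-based multi-objective step the abstract alludes to, using the well-known two-task minimum-norm (MGDA-style) closed form: given the gradient of the ELBO term and the gradient of the contrastive term with respect to shared parameters, it finds the convex combination of minimum norm and returns the resulting update direction. This is a hedged illustration under assumed names (min_norm_combination, g_elbo, g_contrast), not the authors' implementation, and the full method in the paper may differ.

```python
# Minimal sketch (not the submission's code) of a two-objective
# minimum-norm gradient combination toward a Pareto-stationary point.
import numpy as np

def min_norm_combination(g_elbo: np.ndarray, g_contrast: np.ndarray):
    """Closed-form solution of min_{a in [0,1]} ||a*g_elbo + (1-a)*g_contrast||^2."""
    diff = g_elbo - g_contrast
    denom = float(diff @ diff)
    if denom == 0.0:
        alpha = 0.5  # gradients coincide; any convex weight is equivalent
    else:
        alpha = float(g_contrast @ (g_contrast - g_elbo)) / denom
        alpha = min(1.0, max(0.0, alpha))  # project onto [0, 1]
    return alpha, alpha * g_elbo + (1.0 - alpha) * g_contrast

# Toy usage with hypothetical gradient vectors.
g1 = np.array([1.0, -0.5, 0.2])   # gradient of the ELBO objective
g2 = np.array([-0.8, 0.4, 0.1])   # gradient of the contrastive objective
alpha, g = min_norm_combination(g1, g2)
print(alpha, g)  # the combined direction; its norm vanishes at a Pareto-stationary point
```

In contrast to a fixed weighted linear combination of the two losses, the weight alpha here is recomputed at every step from the current gradients, which is what allows the procedure to handle competing objectives.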
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)
Supplementary Material: zip