Autoregressive Language Model for Zero-shot Constrained Keyphrase Generation

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Recently, most state-of-the-art keyphrase prediction models have been supervised generative models, which significantly outperform earlier approaches. Nevertheless, they still suffer from limited domain robustness and the high cost of building large annotated datasets. To overcome these limitations, unsupervised methods have also been actively studied. We observe that these methods have a defect in a necessary preprocessing step: extracting candidate phrases before selecting keyphrases. Because candidate extraction does not cover the diverse surface forms of phrases, unsupervised methods cannot guarantee that oracle keyphrases are recoverable. In this paper, we present zero-shot constrained keyphrase generation that leverages a large-scale language model. To generate diverse keyphrases, we explore controlling phrase formation during generation. Finally, we evaluate on benchmark datasets from the scholarly domain and achieve better performance than unsupervised methods on several datasets, without a candidate extraction stage. For domain robustness, we compare results on the out-of-domain DUC dataset against those on NUS. Since our method is not fine-tuned on a corpus from a specific domain, it outperforms supervised sequence-to-sequence methods in this setting.
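The abstract describes generating keyphrases autoregressively with length-constrained decoding rather than scoring pre-extracted candidates. The following is a minimal illustrative sketch of that idea, not the paper's actual method: it substitutes a toy bigram model for the large-scale language model and uses greedy decoding with a simple maximum-length constraint. All names and the corpus here are hypothetical.

```python
import math
from collections import defaultdict

# Toy bigram "language model" built from a tiny corpus; a hypothetical
# stand-in for a large-scale autoregressive LM (for illustration only).
corpus = ("neural keyphrase generation models generate diverse keyphrases . "
          "autoregressive language models generate text token by token .").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_logprobs(prev):
    """Log-probabilities of the next token given the previous one."""
    total = sum(counts[prev].values())
    return {tok: math.log(c / total) for tok, c in counts[prev].items()}

def generate_phrase(seed, max_len=3):
    """Greedy constrained decoding: grow a phrase token by token,
    stopping at max_len tokens or at a sentence boundary."""
    phrase = [seed]
    while len(phrase) < max_len:
        dist = next_token_logprobs(phrase[-1])
        if not dist:
            break
        tok = max(dist, key=dist.get)
        if tok == ".":  # sentence boundary ends the phrase
            break
        phrase.append(tok)
    return " ".join(phrase)

print(generate_phrase("keyphrase"))  # → keyphrase generation models
```

In a real system the toy scorer would be replaced by the conditional next-token distribution of a pretrained language model, and the length cap by richer phrase-level constraints, but the decoding loop keeps the same shape: no candidate list is extracted in advance, so the model is free to produce phrase forms that extraction-based pipelines would miss.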