Unsupervised Compressive Text Summarisation with Reinforcement Learning

Anonymous

17 Dec 2021 (modified: 05 May 2023) · ACL ARR 2021 December Blind Submission
Abstract: Compressive text summarisation has recently emerged as a balance between the limited conciseness of extractive summarisation and the factual-hallucination risk of abstractive summarisation. However, most existing compressive summarisation methods are supervised, relying on the expensive creation of new training datasets with corresponding compressive summaries. In this paper, we propose an unsupervised compressive summarisation method that uses reinforcement learning to optimise a summary's semantic coverage and fluency, simulating human judgement of summarisation quality. Our model consists of an extractor agent and a compressor agent, both built on a multi-head attentional pointer-based structure. The extractor agent first selects salient sentences from a document; the compressor agent then compresses the extracted sentences by choosing salient words to form a summary. The summary reward is computed without reference summaries, so no parallel dataset of document-summary pairs is required to train the proposed model. To the best of our knowledge, this is the first work on unsupervised compressive summarisation. Experimental results on three widely used datasets, Newsroom, CNN/DM, and XSum, show that our model achieves promising performance, with a significant improvement on Newsroom in terms of the ROUGE metric.
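The extract-then-compress pipeline and the reference-free reward described in the abstract can be sketched as follows. This is a minimal illustration only: simple frequency heuristics stand in for the paper's learned multi-head attentional pointer agents, and the `coverage_reward` function is an assumed proxy for the semantic-coverage term (the paper's reward also includes a fluency component, omitted here). All function names are illustrative, not from the paper.

```python
import re
from collections import Counter

# Small stop-word list used to separate content words from function words.
STOP = {"the", "a", "an", "of", "to", "and", "in", "on", "is", "are", "that", "this"}


def split_sentences(document):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]


def extract(document, k=2):
    # Toy extractor agent: rank sentences by the average document-frequency
    # of their content words and keep the top-k, preserving document order.
    # (The paper's extractor is a learned pointer network, not this heuristic.)
    sents = split_sentences(document)
    freq = Counter(w for w in re.findall(r"[a-z]+", document.lower()) if w not in STOP)

    def score(s):
        words = [w for w in re.findall(r"[a-z]+", s.lower()) if w not in STOP]
        return sum(freq[w] for w in words) / max(len(words), 1)

    top = set(sorted(sents, key=score, reverse=True)[:k])
    return [s for s in sents if s in top]


def compress(sentence, min_len=4):
    # Toy compressor agent: keep content words of reasonable length,
    # a crude stand-in for pointer-based salient-word selection.
    kept = []
    for w in sentence.split():
        bare = re.sub(r"\W", "", w).lower()
        if len(bare) >= min_len and bare not in STOP:
            kept.append(w)
    return " ".join(kept)


def coverage_reward(document, summary, top_n=10):
    # Reference-free reward: the fraction of the document's top-n content
    # words that survive in the summary. No gold summary is consulted, so
    # no parallel document-summary pairs are needed, mirroring the
    # unsupervised setting described in the abstract.
    freq = Counter(w for w in re.findall(r"[a-z]+", document.lower()) if w not in STOP)
    top = [w for w, _ in freq.most_common(top_n)]
    summ_words = set(re.findall(r"[a-z]+", summary.lower()))
    return sum(w in summ_words for w in top) / max(len(top), 1)
```

In the paper's setting, the reward would be fed back through a policy-gradient update to train both agents; here it is only computed, to show that the signal requires nothing beyond the source document and the candidate summary.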
Paper Type: long
Consent To Share Data: yes