TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: text summarization, unsupervised learning, natural language processing
TL;DR: A new state-of-the-art for unsupervised abstractive text summarization
Abstract: Text summarization aims to extract the essential information from a piece of text and transform it into a concise version. Existing unsupervised abstractive summarization models rely on recurrent neural network frameworks and ignore the abundant unlabeled corpora available. To address these issues, we propose TED, a transformer-based unsupervised summarization system with dataset-agnostic pretraining. We first leverage the lead bias in news articles to pretrain the model on large-scale corpora. We then finetune TED on target domains through theme modeling and a denoising autoencoder to enhance the quality of the summaries. Notably, TED outperforms all unsupervised abstractive baselines on the NYT, CNN/DM, and English Gigaword datasets, which span various document styles. Further analysis shows that the summaries generated by TED are highly abstractive, containing even higher proportions of novel tokens than those from supervised models.
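The two training stages described in the abstract lend themselves to a short illustration. Below is a minimal sketch, not the authors' released code, of (a) constructing lead-bias pretraining pairs, where the leading sentences of a news article serve as the pseudo-summary target and the remainder as the model input, and (b) a token-level corruption of the kind used to train a denoising autoencoder. The sentence splitter, the `LEAD_K` constant, and the noise parameters (`drop_prob`, `window`) are illustrative assumptions, not values taken from the paper.

```python
import random
from typing import List, Optional, Tuple

LEAD_K = 3  # number of leading sentences used as the pseudo-summary (assumed)


def split_sentences(article: str) -> List[str]:
    """Naive sentence splitter for illustration; a real pipeline would use
    a proper tokenizer (e.g. NLTK or spaCy)."""
    return [s.strip() for s in article.split(". ") if s.strip()]


def make_lead_bias_pair(article: str, k: int = LEAD_K) -> Optional[Tuple[str, str]]:
    """Turn an unlabeled news article into a (source, target) pair:
    the first k sentences become the pseudo-summary target, the rest
    of the article becomes the model input."""
    sents = split_sentences(article)
    if len(sents) <= k:
        return None  # too short to yield a useful pair
    return " ".join(sents[k:]), " ".join(sents[:k])


def corrupt(tokens: List[str], drop_prob: float = 0.1, window: int = 3) -> List[str]:
    """Corrupt a token sequence by random deletion and local shuffling;
    a denoising autoencoder is trained to reconstruct the clean sequence
    from this noisy input."""
    kept = [t for t in tokens if random.random() > drop_prob] or tokens[:1]
    # Local shuffle: jitter each index by up to `window` positions, then sort.
    keys = [i + random.uniform(0, window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]
```

Under this reading, pretraining needs no human labels at all: `make_lead_bias_pair` converts any large news corpus into supervision for a transformer encoder-decoder, and `corrupt` supplies the noisy inputs for the denoising objective used during finetuning.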
Code: https://drive.google.com/file/d/17pp6coa19oOTbW3JEXlS_WMb7vjcCGWJ/view?usp=sharing