Repurposing Decoder-Transformer Language Models for Abstractive Summarization

Published: 01 Nov 2019, Last Modified: 05 May 2023, DI 2019
Keywords: summarization, language models, attention, transformers
Abstract: Neural network models have shown excellent fluency and performance when applied to abstractive summarization. Many approaches to neural abstractive summarization involve the introduction of significant inductive bias, such as pointer-generator architectures, coverage, and partially extractive procedures, designed to mimic human summarization. We show that it is possible to attain competitive performance by instead directly viewing summarization as language modeling. We introduce a simple procedure built upon pre-trained decoder-transformers to obtain competitive ROUGE scores using a language modeling loss alone, with no beam search or other decoding-time optimization, relying instead on efficient nucleus sampling and greedy decoding.
TL;DR: We introduce a simple procedure to repurpose pre-trained transformer-based language models to perform abstractive summarization well.
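The sketch below illustrates the general idea described in the abstract: treating summarization as conditional language modeling with a pre-trained decoder-only transformer, trained with a plain language modeling loss and decoded with nucleus sampling or greedy decoding. It is a minimal illustration, not the authors' code; the choice of GPT-2, the "TL;DR:" separator, and all hyperparameters are assumptions for exposition.

```python
# Minimal sketch (assumptions noted above): summarization as language modeling
# with a pre-trained decoder transformer, using the Hugging Face transformers API.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def training_loss(article: str, summary: str) -> torch.Tensor:
    # Concatenate article and summary into a single sequence and compute the
    # standard next-token prediction loss over it (a language modeling loss alone).
    text = article + " TL;DR: " + summary
    ids = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=1024).input_ids
    return model(ids, labels=ids).loss

def summarize(article: str, max_new_tokens: int = 60, top_p: float = 0.9) -> str:
    # Decode the summary continuation with nucleus (top-p) sampling;
    # setting do_sample=False gives greedy decoding. No beam search is used.
    prompt = article + " TL;DR: "
    ids = tokenizer(prompt, return_tensors="pt",
                    truncation=True, max_length=900).input_ids
    gen = model.generate(
        ids,
        do_sample=True,
        top_p=top_p,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(gen[0, ids.shape[1]:], skip_special_tokens=True)
```

In this framing, fine-tuning only optimizes `training_loss` over article–summary pairs; no pointer-generator, coverage, or extractive components are needed at decoding time.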
