Improving Neural Abstractive Summarization Using Transfer Learning and Factuality-Based Evaluation: Towards Automating Science Journalism

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: neural abstractive summarization, transfer learning, multitask learning, natural language processing
TL;DR: A new application of seq2seq modeling to automating science journalism; a highly abstractive dataset; transfer learning tricks; an automatic evaluation measure.
Abstract: We propose Automating Science Journalism (ASJ), the process of producing a press release from a scientific paper, as a novel task that can serve as a new benchmark for neural abstractive summarization. ASJ is challenging: long source texts must be summarized into long target texts, while complex scientific concepts must be paraphrased so that a general audience can understand them. For this purpose, we introduce a specialized dataset for ASJ that contains scientific papers paired with their press releases from Science Daily. While state-of-the-art sequence-to-sequence (seq2seq) models can easily generate convincing press releases for ASJ, these releases are generally nonfactual and deviate from the source. To address this issue, we improve seq2seq generation via transfer learning by co-training with two new targets: (i) the scientific abstracts of the source papers and (ii) partitioned press releases. We further design a factuality measure that scores how faithful the press releases generated by our seq2seq models are to the source scientific papers. Our quantitative and qualitative evaluation shows sizable improvements over a strong baseline, suggesting that the proposed framework could improve seq2seq summarization beyond ASJ.
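
The abstract does not spell out how the factuality measure is computed, only that it scores how faithful a generated press release is to its source paper. As a rough illustration of what such a source-grounding score could look like, the sketch below uses n-gram overlap between the generation and the source. This is an assumption for illustration, not the paper's actual measure; all function names (`factuality_score`, `ngrams`, `tokenize`) are hypothetical.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokenizer (illustrative; a real system would use a proper tokenizer)."""
    return re.findall(r"[a-z0-9]+", text.lower())

def ngrams(tokens, n):
    """Return the multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def factuality_score(source, generated, n=3):
    """
    Hypothetical factuality proxy: the fraction of n-grams in the
    generated press release that also occur in the source paper.
    A score near 1.0 means the generation stays close to the source;
    a score near 0.0 suggests content unsupported by the source.
    """
    src = ngrams(tokenize(source), n)
    gen = ngrams(tokenize(generated), n)
    if not gen:
        return 0.0
    grounded = sum(min(count, src[gram]) for gram, count in gen.items())
    return grounded / sum(gen.values())

# Toy example: the generated sentence partly copies from the source,
# so it receives an intermediate score.
paper = "The enzyme catalyzes the conversion of glucose into pyruvate during glycolysis."
release = "Researchers found the enzyme catalyzes the conversion of glucose into energy."
print(f"factuality: {factuality_score(paper, release, n=2):.2f}")
```

Note that an overlap-based proxy rewards extractive copying, so for a highly abstractive task like ASJ a practical measure would likely need semantic matching (e.g., entailment) rather than pure surface overlap.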