Contour: Penalty and Spotlight Mask for Abstractive Summarization

Published: 01 Jan 2021, Last Modified: 14 May 2025, ACIIDS (Companion) 2021, CC BY-SA 4.0
Abstract: Transferring precise information, such as proper nouns or exclusive phrases, from the input document to the output summary is a key requirement of the abstractive summarization task. To address this problem, we propose Contour, which emphasizes the most suitable words in the original document that carry crucial information at each prediction step. Contour consists of two independent parts: Penalty and Spotlight. Penalty penalizes inapplicable words at both training and inference time, while Spotlight increases the potential of important related words. We evaluate Contour on datasets of different scales and languages: large-scale (CNN/DailyMail) for English, medium-scale (VNTC-Abs) for Vietnamese, and small-scale (Livedoor News Corpus) for Japanese. Contour not only significantly outperforms the baselines on all three ROUGE metrics but also adapts well to these different datasets.
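The abstract describes Contour as applying a Penalty mask (to suppress inapplicable words) and a Spotlight mask (to boost important source-related words) at each prediction step. The sketch below is only an illustration of how such step-wise masking of decoder logits could look; the function name `contour_adjust_logits`, the mask construction, and the weighting scheme are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def contour_adjust_logits(logits, penalty_mask, spotlight_mask,
                          penalty_weight=1.0, spotlight_weight=1.0):
    """Adjust decoder logits for one prediction step (illustrative sketch).

    logits:         (batch, vocab) raw scores from the decoder.
    penalty_mask:   (batch, vocab) 1.0 for words judged inapplicable at this
                    step, 0.0 otherwise (assumed construction).
    spotlight_mask: (batch, vocab) 1.0 for important words related to the
                    source document (e.g. proper nouns), 0.0 otherwise
                    (assumed construction).
    """
    # Penalty: push down the scores of inapplicable words.
    adjusted = logits - penalty_weight * penalty_mask
    # Spotlight: raise the scores of important source-related words.
    adjusted = adjusted + spotlight_weight * spotlight_mask
    # Normalize to log-probabilities for the next-token distribution.
    return F.log_softmax(adjusted, dim=-1)

# Usage at one decoding step (random tensors, shapes only, for illustration):
batch, vocab = 2, 32000
logits = torch.randn(batch, vocab)
penalty_mask = (torch.rand(batch, vocab) > 0.95).float()
spotlight_mask = (torch.rand(batch, vocab) > 0.99).float()
log_probs = contour_adjust_logits(logits, penalty_mask, spotlight_mask)
```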