On the Role of Summary Content Units in Text Summarization Evaluation

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: We test novel ways to approximate summary content units and assess their general value through extrinsic and intrinsic evaluation.
Abstract: At the heart of the pyramid evaluation method for text summarization lie human-written summary content units (SCUs). These SCUs are concise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, a process that can be partially automated via natural language inference (NLI) systems. Interestingly, with the aim of fully automating pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated from parsed semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages? ii) Under which conditions do SCUs (or their approximations) offer the most value? In this work, we examine two novel strategies to approximate SCUs: generating SMUs from meaning representations and SGUs from large language models (LLMs). We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when ranking short summaries, but may not help as much when ranking systems or longer summaries.
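The pyramid method referenced in the abstract scores a candidate summary by the SCU weight it recovers relative to an ideally informative summary. The following is a minimal illustrative sketch of that scoring idea, not the paper's or Zhang and Bansal's implementation; the function name and data layout are hypothetical:

```python
# Hypothetical sketch of pyramid-style scoring. Each SCU's weight is the
# number of reference summaries it appears in; a candidate's score is the
# weight it recovers, normalized by the best weight achievable with the
# same number of SCUs.

def pyramid_score(scu_weights, matched):
    """scu_weights: weight of every SCU in the pyramid.
    matched: indices of SCUs judged present in the candidate
    (in practice via human annotation or an NLI entailment check)."""
    recovered = sum(scu_weights[i] for i in matched)
    # An ideal summary expressing the same number of SCUs would
    # capture the heaviest ones available.
    best = sum(sorted(scu_weights, reverse=True)[:len(matched)])
    return recovered / best if best else 0.0

# Toy example: 4 SCUs weighted by reference-summary agreement.
weights = [3, 2, 2, 1]
print(pyramid_score(weights, matched=[0, 3]))  # recovers weight 4 of best 5 -> 0.8
```

The normalization by the top-weighted SCUs is what lets pyramid scores compare summaries of different lengths on content coverage rather than raw size.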
Paper Type: short
Research Area: Resources and Evaluation
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English