LCFO: Long Context and Long Form Output Dataset and Benchmarking

ACL ARR 2024 December Submission1086 Authors

15 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: This paper presents the Long Context and Long Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5k words average length), each of which comes with three summaries of different lengths (20%, 10%, and 5% of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e., summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves the best human scores among automatic systems in both summarization and summary expansion tasks (+10% and +20%, respectively). It even surpasses human output quality in the case of short summaries (+7%). Overall, automatic metrics achieve low correlations with human evaluation scores (approximately 0.4), but moderate correlations on specific evaluation aspects such as fluency and attribution (approximately 0.6).
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Long context input, long form output
Contribution Types: Data resources
Languages Studied: English
Submission Number: 1086
