SummExecEdit: A Factual Consistency Benchmark in Summarization with Executable Edits

ACL ARR 2024 December Submission 489 Authors

14 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: Detecting factual inconsistencies in summarization is critical, yet existing benchmarks lack the challenge and interpretability needed for robust evaluation. In this paper, we introduce SummExecEdit, a novel benchmark that leverages executable edits to assess models on their ability both to detect factual errors and to provide accurate explanations. The top-performing model, Claude3-Opus, achieves a joint detection and explanation score of only 0.49 on our benchmark, with individual scores of 0.67 for detection and 0.73 for explanation. Furthermore, we identify four primary types of explanation errors; the most common, accounting for 45.4% of errors, is an explanation that focuses on a completely unrelated part of the summary.
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: Summarization, Human-Centered NLP, Generation
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 489