Do Automatic Factuality Metrics Measure Factuality? A Critical Evaluation

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Summarization, Faithfulness metrics
TL;DR: This work stress-tests factuality metrics, showing that they mostly perform well only on cases that shallow models also handle, struggle with complex examples, fail to reward factual corrections, and are easily gamed, raising reliability concerns.
Abstract: Modern LLMs can now produce highly readable abstractive summaries, to the point that traditional automated metrics for evaluating summary quality, such as ROUGE, have saturated. However, LLMs still sometimes introduce inaccuracies into summaries, i.e., information inconsistent with or unsupported by the corresponding source. Measuring the occurrence of these often subtle factual inconsistencies automatically has proved challenging. This in turn has motivated the development of metrics intended to measure the factual consistency of generated summaries against sources. But are these approaches measuring what they purport to? Or are they mostly exploiting artifacts? In this work, we stress test a range of automatic factuality metrics—including specialized model-based approaches and LLM-based prompting methods—to probe what they actually capture. Using a shallow classifier to separate “easy” examples for factual evaluation—where surface features suffice—from “hard” cases requiring deeper reasoning, we find that all metrics show substantial performance drops on the latter. Furthermore, some metrics are more sensitive to benign, fact-preserving edits than to factual corrections. Building on this observation, we demonstrate that most automatic factuality metrics can be gamed—that is, their scores can be artificially inflated by appending innocuous, content-free sentences to summaries. Among the metrics tested, the LLM prompt-based ChatGPT-DA approach is the most robust and reliable; however, it comes with a notable caveat: it likely relies more on parametric knowledge than on the provided source when making judgments. Taken together, our findings call into question the reliability of current factuality metrics and prompt a broader reflection on what these metrics are truly measuring. We conclude with concrete recommendations for improving both benchmark design and metric robustness, particularly in light of their vulnerability to superficial manipulations.
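The gaming probe described in the abstract can be illustrated with a minimal sketch. Here `score_factuality` is a hypothetical placeholder for whatever metric is under test (a specialized model-based scorer or an LLM prompt), and the appended sentences are illustrative examples of content-free additions, not the exact phrasings used in the study.

```python
# Minimal sketch (under the assumptions above): append an innocuous,
# content-free sentence to a summary and check whether the automatic
# factuality score rises even though no factual content changed.

from typing import Callable

# Content-free sentences of the kind the abstract describes (illustrative only).
INNOCUOUS_SENTENCES = [
    "This summary is based on the article above.",
    "Further details can be found in the source document.",
]


def gaming_delta(
    score_factuality: Callable[[str, str], float],  # hypothetical metric: (source, summary) -> score
    source: str,
    summary: str,
) -> list[float]:
    """Return the score change induced by each innocuous append.

    A positive delta means the metric rewarded a content-free addition,
    i.e., its score can be inflated without any gain in factual consistency.
    """
    base = score_factuality(source, summary)
    deltas = []
    for sentence in INNOCUOUS_SENTENCES:
        gamed_summary = summary.rstrip() + " " + sentence
        deltas.append(score_factuality(source, gamed_summary) - base)
    return deltas
```

A metric that is robust in the sense the abstract calls for should yield deltas at or near zero under such manipulations.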
Primary Area: Evaluation (e.g., methodology, meta studies, replicability and validity, human-in-the-loop)
Submission Number: 21962