Valid ≠ Necessary: Diagnosing Latent Inefficiency in Chain-of-Thought

ACL ARR 2026 January Submission 9135 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Large Language Models, Over-reasoning, Reasoning Efficiency
Abstract: Chain-of-Thought (CoT) prompting has significantly advanced the reasoning capabilities of Large Language Models (LLMs), yet it often incurs substantial computational cost due to "over-reasoning": the generation of redundant, verbose, or irrelevant steps. While existing reasoning-step evaluators effectively detect logical fallacies and factual errors, our analysis reveals a critical blind spot: they fail to penalize "valid but inefficient" reasoning steps that inflate token usage without contributing to the solution. To diagnose this limitation systematically, we introduce RIV-GSM8K, a diagnostic benchmark injected with five distinct types of inefficiency, including circular reasoning and excessive decomposition. Diagnostic experiments reveal that state-of-the-art evaluators struggle to distinguish these inefficiencies from necessary reasoning. To address this, we propose CAID (Context-Aware Information Density), a training-free metric grounded in information theory that effectively identifies low-utility steps. To validate the metric's practical utility, we apply it within PACE, a post-hoc compression strategy. Empirical results on GSM8K, StrategyQA, and ARC-Challenge demonstrate that PACE reduces token consumption by 31–53% while maintaining reasoning accuracy, confirming that CAID successfully filters informational "froth" out of reasoning chains without compromising deductive validity.
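The abstract does not give CAID's exact formulation, so the following is only a minimal sketch of the general idea of a context-aware information-density score: rate each reasoning step by the average surprisal of its tokens under a language model conditioned on the question and prior steps, and treat low-density steps (those the model predicts almost for free, e.g. restatements) as pruning candidates. The function names, the threshold, and the hard-coded log-probabilities are all illustrative assumptions, not the paper's implementation; in practice the log-probabilities would come from an autoregressive LM.

```python
def step_information_density(token_logprobs):
    """Average surprisal (nats per token) of one reasoning step.

    `token_logprobs` holds log p(token | question, prior steps) for each
    token of the step, as produced by any autoregressive LM (hypothetical
    interface). High density = the step carries new information.
    """
    if not token_logprobs:
        return 0.0
    total_surprisal = -sum(token_logprobs)        # total information in nats
    return total_surprisal / len(token_logprobs)  # density per token

def flag_low_utility(steps, threshold=0.5):
    """Mark steps whose density falls below `threshold` as candidates for
    removal (a toy stand-in for a PACE-style compression pass; the
    threshold value here is arbitrary)."""
    return [step_information_density(lp) < threshold for lp in steps]

# Toy example: the second step merely restates prior context, so the model
# predicts its tokens with high confidence -> low surprisal -> low density.
steps = [
    [-2.1, -1.8, -2.5, -1.2],     # novel computation: high surprisal
    [-0.05, -0.1, -0.02, -0.08],  # redundant restatement: low surprisal
    [-1.9, -2.2, -1.4],           # another informative step
]
print(flag_low_utility(steps))  # -> [False, True, False]
```

Under this reading, "valid but inefficient" steps are exactly those that a well-conditioned model finds unsurprising: logically sound, yet contributing little information beyond what the context already determines.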
Paper Type: Long
Research Area: Mathematical, Symbolic, Neurosymbolic, and Logical Reasoning
Research Area Keywords: Mathematical, Symbolic, and Logical Reasoning, Language Modeling, Resources and Evaluation
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 9135