Efficient Test-Time Scaling via Temporal Reasoning Aggregation

ACL ARR 2026 January Submission 5869 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Large Language Models, Test-Time Scaling, Early Exit, Multi-Step Reasoning, Confidence Calibration, Mathematical Inference
Abstract: Test-time scaling improves the reasoning performance of large language models but often results in token-inefficient overthinking, where models continue reasoning beyond what is necessary for a correct answer. Existing dynamic early-exit methods typically rely on single-step confidence signals, which are often unreliable for detecting reasoning convergence in multi-step settings. To mitigate this limitation, we propose TRACE, a training-free framework for efficient test-time scaling that decides when to terminate reasoning based on temporal aggregation of multi-step evidence rather than instantaneous signals. TRACE detects reasoning convergence over time by aggregating two complementary signals across recent reasoning steps: answer consistency, which captures the persistence of predicted answers, and confidence trajectory, which models the temporal evolution of model confidence. By combining these two signals, TRACE can accurately determine whether the reasoning process has converged, promptly halting inference and avoiding redundant reasoning steps. Extensive experiments on multiple challenging benchmarks show that TRACE reduces reasoning token usage by 25–30% on average while maintaining accuracy within 1–2% of full-length reasoning, consistently outperforming existing dynamic reasoning methods.
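The abstract describes an exit criterion built from two windowed signals: persistence of the predicted answer and the trend of model confidence. The sketch below is an illustrative reading of that idea, not the authors' implementation; the function name `should_exit`, the window size, and all thresholds are hypothetical choices for demonstration.

```python
from collections import Counter

def should_exit(recent_answers, recent_confidences,
                window=4, consistency_thresh=0.75,
                conf_thresh=0.9, slope_thresh=0.0):
    """Illustrative early-exit check combining answer consistency with a
    confidence trajectory over the last `window` reasoning steps.
    All thresholds are hypothetical, not taken from the paper."""
    if len(recent_answers) < window:
        return False  # not enough history to judge convergence
    answers = recent_answers[-window:]
    confs = recent_confidences[-window:]
    # Answer consistency: fraction of the window agreeing with the modal answer.
    top_count = Counter(answers).most_common(1)[0][1]
    consistency = top_count / window
    # Confidence trajectory: mean level plus a simple endpoint slope.
    mean_conf = sum(confs) / window
    slope = (confs[-1] - confs[0]) / (window - 1)
    # Halt only when the answer has stabilized and confidence is high and non-decreasing.
    return (consistency >= consistency_thresh
            and mean_conf >= conf_thresh
            and slope >= slope_thresh)
```

A stable answer with rising confidence (e.g., four consecutive steps predicting "42" at confidences 0.92–0.96) would trigger an exit, while oscillating answers or flat low confidence would let reasoning continue.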
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: LLM Efficiency, Efficient Inference, Test-Time Scaling, Early Exit, Dynamic Reasoning, Mathematical Reasoning, Logical Reasoning, Confidence Calibration, Multi-Step Reasoning
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 5869