Calibrated Reasoning: An Explanatory Verifier for Dynamic and Efficient Problem-Solving

Published: 16 Oct 2025 · Last Modified: 10 Nov 2025
NeurIPS 2025 ER Workshop Spotlight
License: CC BY 4.0
Keywords: Verifier, RL, GRPO, Test-time reasoning, large reasoning models
TL;DR: Enhancing test-time scaling of reasoning models using a verifier custom-trained for inference-time compute.
Abstract: Advanced test-time computing strategies are essential for scaling reasoning models, but their effectiveness is capped by the models' poor self-evaluation. We propose a pairwise Explanatory Verifier, trained via reinforcement learning (GRPO), that produces calibrated confidence scores and associated natural language reasoning for generated solutions. Our verifier improves the accuracy and efficiency of test-time strategies like best-of-n and self-reflection. Crucially, it excels at identifying challenging failure modes, such as when both candidate solutions are identically incorrect, succeeding where standard methods like majority voting fail.
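The best-of-n strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `pairwise_verify` is a hypothetical stand-in for the GRPO-trained Explanatory Verifier (here a toy heuristic), and the round-robin aggregation of pairwise confidences is one plausible way to turn pairwise scores into a single best-of-n choice.

```python
# Hypothetical sketch of best-of-n selection with a pairwise verifier.
from itertools import combinations

def pairwise_verify(a: str, b: str) -> float:
    # Stand-in for the trained verifier: returns confidence that `a` is the
    # correct solution of the pair. Toy heuristic: prefer the longer,
    # more worked-out answer. The paper trains this scoring with GRPO and
    # also produces natural language reasoning, omitted here.
    return 1.0 if len(a) >= len(b) else 0.0

def best_of_n(candidates: list[str]) -> str:
    # Round-robin: each candidate accumulates the verifier's confidence
    # from every pairwise comparison; the highest total wins.
    scores = [0.0] * len(candidates)
    for i, j in combinations(range(len(candidates)), 2):
        p = pairwise_verify(candidates[i], candidates[j])
        scores[i] += p
        scores[j] += 1.0 - p
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]

print(best_of_n(["x=2", "x = 2 because 2+2=4", "x=5"]))
# → x = 2 because 2+2=4
```

Unlike majority voting, a pairwise scheme like this can in principle flag the failure mode the abstract highlights: when both candidates in a pair are identically incorrect, a calibrated verifier can assign low confidence to both rather than being forced to pick a "winner".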
Submission Number: 134