More Bang for the Buck: Process Reward Modeling with Entropy-Driven Uncertainty

ICLR 2026 Conference Submission 18162 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: PRM; Process Reward Model; Entropy
TL;DR: The Entropy-Driven Uncertainty Process Reward Model (EDU-PRM) automatically segments complex reasoning steps via entropy-based uncertainty, eliminating manual step annotation and outperforming existing PRM baselines on ProcessBench.
Abstract: We introduce the Entropy-Driven Uncertainty Process Reward Model (EDU-PRM), a novel entropy-driven training framework for process reward modeling that enables dynamic, uncertainty-aligned segmentation of complex reasoning steps, eliminating the need for costly manual step annotations. Unlike previous Process Reward Models (PRMs), which rely on static partitioning and human labeling, EDU-PRM automatically anchors step boundaries at tokens with high predictive entropy, effectively capturing intrinsic logical transitions and facilitating efficient exploration of diverse reasoning paths. On the ProcessBench benchmark, EDU-PRM outperforms strong public PRM baselines such as Math-Shepherd PRM and Omega PRM, and achieves results comparable to SOTA models while using only 1.5\% of the training data. Furthermore, by leveraging our proposed EDU sampling strategy, we observe an accuracy boost from 64.7\% to 67.3\% on generative reasoning tasks, accompanied by a 32\% reduction in token usage. These findings underscore the potential of EDU-PRM as a scalable and annotation-efficient paradigm for process supervision in mathematical reasoning, paving the way for more efficient and robust approaches to complex mathematical problem solving.
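The core mechanism described in the abstract, anchoring step boundaries at tokens whose predictive entropy is high, can be illustrated with a minimal sketch. The function names, the entropy threshold, and the use of raw per-token logits below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def token_entropy(logits: np.ndarray) -> np.ndarray:
    """Predictive entropy for each token from a (seq_len, vocab_size) logit matrix."""
    # Numerically stable softmax over the vocabulary dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def segment_by_entropy(logits: np.ndarray, threshold: float = 2.0) -> list[tuple[int, int]]:
    """Split a token sequence into reasoning steps by placing a boundary
    immediately after every token whose predictive entropy exceeds `threshold`.
    Returns a list of (start, end) index pairs (end exclusive)."""
    entropy = token_entropy(logits)
    boundaries = [i for i, h in enumerate(entropy) if h > threshold]
    segments, start = [], 0
    for b in boundaries:
        segments.append((start, b + 1))  # step ends just after the high-entropy token
        start = b + 1
    if start < len(entropy):
        segments.append((start, len(entropy)))  # trailing tokens form the final step
    return segments
```

In this sketch, each resulting segment would serve as one "step" to be scored by the process reward model, replacing manually annotated step boundaries with entropy-derived ones.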
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 18162