Abstract: Large Language Models (LLMs) have demonstrated strong reasoning abilities through supervised fine-tuning and reinforcement learning.
However, existing Process Reward Models (PRMs) are vulnerable to reward hacking and require expensive, large-scale annotation of reasoning steps, limiting their reliability and scalability.
To address the first problem, we propose a novel reward model, the Hierarchical Reward Model (HRM), which evaluates both individual and consecutive reasoning steps at fine-grained and coarse-grained levels. HRM excels at assessing multi-step reasoning coherence, particularly in cases where a flawed step is later corrected through self-reflection.
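A minimal sketch of the fine- and coarse-grained scoring idea follows, assuming a generic per-step scorer; the function names and the averaging rule are illustrative assumptions, not the paper's trained HRM.

```python
# Sketch of hierarchical reward scoring (hypothetical interface).
# `score_step` stands in for a learned reward model; the paper's HRM is a
# trained model, so this stub only illustrates the two granularities.
from typing import Callable, List

def hierarchical_reward(
    steps: List[str],
    score_step: Callable[[str], float],
) -> float:
    """Combine fine-grained (single-step) and coarse-grained
    (consecutive two-step) rewards into one score."""
    if not steps:
        return 0.0
    # Fine-grained: evaluate each reasoning step on its own.
    fine = [score_step(s) for s in steps]
    # Coarse-grained: evaluate merged consecutive steps, so a flawed step
    # followed by a self-correction can still be judged as coherent.
    coarse = [score_step(steps[i] + "\n" + steps[i + 1])
              for i in range(len(steps) - 1)]
    scores = fine + coarse
    return sum(scores) / len(scores)
```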
Furthermore, to address the inefficiency of autonomously annotating PRM training data via Monte Carlo Tree Search (MCTS), we propose a lightweight data augmentation strategy, Hierarchical Node Compression (HNC), which merges consecutive reasoning steps within the tree structure. Applying HNC to MCTS-generated reasoning trajectories increases the diversity and robustness of HRM training data, while introducing controlled noise with minimal computational overhead.
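A minimal sketch of the node-merging idea behind HNC is shown below, assuming a simple tree node structure; the `Node` class and merge rule are assumptions made for illustration, not the paper's actual data structures.

```python
# Sketch of Hierarchical Node Compression (HNC) on an MCTS-style
# reasoning tree: each parent-child pair yields a coarser merged node,
# which augments the HRM training data at low cost.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    step: str                              # text of one reasoning step
    children: List["Node"] = field(default_factory=list)

def compress_child(parent: Node, i: int) -> Node:
    """Return a new node merging `parent` with its i-th child,
    inheriting that child's children (one HNC merge operation)."""
    child = parent.children[i]
    return Node(step=parent.step + "\n" + child.step,
                children=list(child.children))

def hnc_augment(root: Node) -> List[Node]:
    """Generate a compressed node for every parent-child pair in the
    tree; these coarser nodes extend the HRM training data."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        for i, child in enumerate(node.children):
            out.append(compress_child(node, i))
            stack.append(child)
    return out
```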
Empirical results on the PRM800K dataset demonstrate that HRM, in conjunction with HNC, achieves greater stability and reliability in evaluation than a standard PRM. Moreover, cross-domain evaluations on the MATH500 and GSM8K datasets confirm HRM’s superior generalization and robustness across diverse reasoning tasks.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Process Reward Model, MCTS, LLM
Contribution Types: NLP engineering experiment, Reproduction study, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 348