Abstract: Process supervision enhances the performance of large language models (LLMs) on reasoning tasks by providing feedback at each step of chain-of-thought reasoning. However, even advanced LLMs are prone to redundant reasoning due to the lack of effective process supervision methods. We argue that the effectiveness of process supervision depends significantly on both the accuracy and the length of reasoning chains, and we identify that these factors exhibit a nonlinear relationship with the overall reward score of the reasoning process. Based on this observation, we propose a dual-dimensional nonlinear process supervision method, named PSPO*, which systematically outlines the workflow from reward model training to policy optimization and highlights the importance of nonlinear rewards in process supervision. Building on PSPO*, we develop PSPO-WRS, which considers the number of reasoning steps when determining reward scores and uses an adjusted Weibull distribution for nonlinear reward shaping. Experimental results on mathematical reasoning datasets demonstrate that PSPO-WRS consistently outperforms current mainstream models.
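To make the abstract's core idea concrete, below is a minimal sketch of length-aware nonlinear reward shaping with a Weibull-form weight. The paper's exact formula and parameters are not given here, so the function names (`weibull_weight`, `shaped_reward`) and the shape/scale values `k` and `lam` are illustrative assumptions, not the authors' implementation.

```python
import math

def weibull_weight(num_steps: int, k: float = 1.5, lam: float = 6.0) -> float:
    """Weibull-shaped weight over chain length (k and lam are hypothetical parameters)."""
    x = float(num_steps)
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def shaped_reward(step_scores: list[float]) -> float:
    """Combine mean step-level accuracy with a nonlinear, length-dependent weight."""
    if not step_scores:
        return 0.0
    accuracy = sum(step_scores) / len(step_scores)  # average per-step reward
    return accuracy * weibull_weight(len(step_scores))

# Example: a chain with many redundant steps receives a lower shaped reward
# than a shorter chain of comparable step accuracy.
print(shaped_reward([0.9, 0.8, 0.9]))
print(shaped_reward([0.9, 0.8, 0.9, 0.85, 0.8, 0.9, 0.85, 0.8, 0.9, 0.85]))
```

The sketch only illustrates the stated intuition that reward should depend nonlinearly on both step accuracy and chain length; the actual adjustment of the Weibull distribution used by PSPO-WRS is defined in the paper.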
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: logical reasoning; reinforcement learning; math QA
Contribution Types: Publicly available software and/or pre-trained models
Languages Studied: English, Chinese
Submission Number: 6980