Let's reward step by step: Step-Level reward model as the Navigators for Reasoning

24 Sept 2023 (modified: 25 Mar 2024) | ICLR 2024 Conference Withdrawn Submission
Keywords: Large Language Model, Process-Supervised Reward Model, Multi-step Reasoning
TL;DR: A heuristic greedy search algorithm guided by a process-supervised reward model for multi-step reasoning with large language models
Abstract: Recent years have seen considerable advancements in multi-step reasoning by Large Language Models (LLMs). Numerous studies elucidate the merits of integrating feedback or search mechanisms to improve reasoning outcomes. The Process-Supervised Reward Model (PRM) typically furnishes LLMs with step-by-step feedback during the training phase, akin to Proximal Policy Optimization (PPO) or rejection sampling. Our objective is to examine the efficacy of PRM in the reasoning phase and to identify optimal ways of using it. To this end, we devise a heuristic greedy search algorithm that employs step-level feedback from the PRM to optimize the reasoning paths explored by LLMs. Our tailored PRM demonstrates improved results compared to Chain-of-Thought (CoT) prompting on mathematical benchmarks such as GSM8K and MATH. To explore the versatility of our methodology, we construct a PRM dataset specifically for coding tasks and observe improved performance on the code generation benchmark HumanEval, highlighting the robust potential of our approach across a variety of reasoning tasks.
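To make the abstract's search procedure concrete, the following is a minimal sketch of what a PRM-guided step-level greedy search could look like. It is an illustration only, not the authors' implementation: `generate_candidate_steps`, `prm_score`, and the hyperparameters `k` and `max_steps` are hypothetical placeholders for the LLM sampling call, the process-supervised reward model, and settings the abstract does not specify.

```python
# Hypothetical sketch of a PRM-guided greedy step search (not the paper's code).
# generate_candidate_steps: samples k candidate next reasoning steps from an LLM.
# prm_score: scores a partial reasoning chain with a process-supervised reward model.
from typing import Callable, List

def greedy_prm_search(
    question: str,
    generate_candidate_steps: Callable[[str, List[str], int], List[str]],
    prm_score: Callable[[str, List[str]], float],
    k: int = 5,
    max_steps: int = 10,
) -> List[str]:
    """Greedily extend a reasoning chain one step at a time.

    At each step, sample k candidate next steps, score each extended chain
    with the step-level reward model, and keep only the highest-scoring one.
    """
    chain: List[str] = []
    for _ in range(max_steps):
        candidates = generate_candidate_steps(question, chain, k)
        if not candidates:
            break
        # Keep the candidate whose extended chain the PRM scores highest.
        best_step = max(candidates, key=lambda s: prm_score(question, chain + [s]))
        chain.append(best_step)
        # Stop once the model emits a final answer (assumed stopping convention).
        if best_step.strip().lower().startswith("final answer"):
            break
    return chain
```

The design choice illustrated here is that the PRM acts at inference time as a step-level selector rather than as a training signal, which is the distinction the abstract draws between this work and PPO- or rejection-sampling-style uses of reward models.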
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9154