Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search

Authors: AAAI 2025 Workshop NeurMAD Submission 15 Authors

Published: 09 Dec 2024 (modified: 14 Jan 2025)
License: CC BY 4.0
Keywords: Large Language Models; Reasoning; Process Supervision; Monte Carlo Tree Search
TL;DR: This work uses Monte Carlo Tree Search to generate step-by-step reasoning data with LLMs, then uses that data to train LLMs to reason better on math problems.
Abstract: Large language models (LLMs) have demonstrated remarkable capability across a variety of tasks. However, reasoning remains a challenge for LLMs. To improve LLMs' reasoning ability, process supervision has proven to be better than outcome supervision. In this work, we study using Monte Carlo Tree Search (MCTS) to have LLMs generate process supervision data for their own training. We sample reasoning steps with an LLM and assign each step a score that captures its "relative correctness," and the LLM is then trained by minimizing the weighted log-likelihood of generating the reasoning steps. This generate-then-train process is repeated iteratively until convergence. Our experimental results demonstrate that the proposed method considerably improves the performance of LLMs on two mathematical reasoning datasets. Furthermore, models trained on one dataset also exhibit improved performance on the other, showing the transferability of the enhanced reasoning ability.
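To make the training objective described in the abstract concrete, below is a minimal sketch of a weighted log-likelihood loss, assuming each sampled reasoning step comes with a token span and a scalar "relative correctness" score derived from MCTS. All names here (`weighted_step_nll`, `step_spans`, `step_scores`) are illustrative and not the authors' implementation; the exact weighting scheme in the paper may differ.

```python
import torch
import torch.nn.functional as F

def weighted_step_nll(logits, target_ids, step_spans, step_scores):
    """Weighted negative log-likelihood over the steps of one sampled solution.

    logits      : (T, V) next-token logits the LLM assigns to the solution tokens
    target_ids  : (T,)   token ids of the sampled solution
    step_spans  : list of (start, end) token ranges, one per reasoning step
    step_scores : list of scalar "relative correctness" scores, one per step
                  (assumed to come from MCTS value estimates)
    """
    # Per-token negative log-likelihood of the sampled solution.
    per_token_nll = F.cross_entropy(logits, target_ids, reduction="none")  # (T,)
    loss = torch.zeros((), device=logits.device)
    for (start, end), score in zip(step_spans, step_scores):
        # Each step's log-likelihood is weighted by its step score, so steps
        # judged more correct contribute more strongly to the update.
        loss = loss + score * per_token_nll[start:end].mean()
    return loss / max(len(step_spans), 1)
```

In the generate-then-train loop the abstract describes, the model would sample step-by-step solutions with MCTS, score each step, minimize a loss of this form over the scored data, and repeat until performance converges.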
Submission Number: 15