TL;DR: This paper introduces DPSDP, a reinforcement learning algorithm that trains an actor-critic LLM system to iteratively refine answers via direct preference learning on self-generated data.
Abstract: Leveraging more test-time computation has proven to be an effective way to boost the reasoning capabilities of large language models (LLMs). Among various methods, the verify-and-improve paradigm stands out for enabling dynamic solution exploration and feedback incorporation. However, existing approaches often suffer from restricted feedback spaces and a lack of coordinated training among the involved parties, leading to suboptimal performance. To address this, we model the multi-turn refinement process as a Markov Decision Process and introduce DPSDP (**D**irect **P**olicy **S**earch by **D**ynamic **P**rogramming), a reinforcement learning algorithm that trains an actor-critic LLM system to iteratively refine answers via direct preference learning on self-generated data. Theoretically, DPSDP can match the performance of any policy within the training distribution. Empirically, we instantiate DPSDP with various base models and show improvements on both in- and out-of-distribution benchmarks. For example, on the MATH 500 benchmark with Ministral-based models, majority voting over five refinement steps raises accuracy from a first-turn 58.2% to 63.2%. An ablation study further confirms the benefits of multi-agent collaboration and out-of-distribution generalization.
Lay Summary: Large language models (LLMs) like ChatGPT can solve complex tasks but often struggle to fix their own mistakes. This paper introduces DPSDP, a method that helps LLMs reflect, take feedback, and refine their answers—much like how students learn from reviewing errors.
Instead of one model doing everything, DPSDP trains two specialized models: an actor that proposes answers and a critic that gives feedback. They go back and forth over several rounds, improving the response each time. The final answer is chosen by majority vote.
By using reinforcement learning to train this collaboration, the method significantly boosts accuracy on math problems—including Olympiad-level ones—and generalizes well to new tasks. This shows how structured reflection and teamwork can make AI reason more like humans.
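To make the verify-and-improve loop concrete, the minimal Python sketch below shows one way the actor-critic interaction and the final majority vote described above could be wired together. The `actor` and `critic` callables, the prompting, and the round count are illustrative placeholders, not the paper's implementation.

```python
from collections import Counter

def refine_with_feedback(problem, actor, critic, num_rounds=5):
    """Illustrative verify-and-improve loop (not the paper's code):
    an actor LLM drafts an answer, a critic LLM returns feedback,
    and the actor revises over several rounds. The final answer is
    picked by majority vote over the per-round answers."""
    answers = []
    feedback = None
    for _ in range(num_rounds):
        # Actor proposes (or revises) an answer, conditioning on prior feedback.
        answer = actor(problem, feedback)
        answers.append(answer)
        # Critic evaluates the current answer and produces natural-language feedback.
        feedback = critic(problem, answer)
    # Majority vote over the answers from all refinement rounds.
    return Counter(answers).most_common(1)[0][0]
```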
Primary Area: Deep Learning->Large Language Models
Keywords: Post-training, LLM-based multi-agents, Reinforcement learning, Mathematical reasoning
Submission Number: 12870