Keywords: LLM Reasoning, Reinforcement Learning, Unsupervised Learning
TL;DR: This paper introduces CPMobius, a data-free reinforcement learning paradigm in which a "Coach" model and a "Player" model collaborate to improve LLM reasoning without any external training data.
Abstract: Large Language Models (LLMs) have demonstrated strong potential in complex reasoning, yet their progress remains fundamentally constrained by reliance on massive amounts of high-quality, human-curated tasks and labels, whether through supervised fine-tuning (SFT) or reinforcement learning (RL) on reasoning-specific data. This dependence renders supervision-heavy training paradigms increasingly unsustainable, with signs of diminishing scalability already evident in practice. To overcome this limitation, we introduce \framework, a collaborative \textbf{Coach–Player} paradigm for data-free reinforcement learning of reasoning models. Unlike traditional adversarial self-play frameworks, \framework, inspired by multi-agent collaboration, treats the Coach and Player as independent but cooperative roles. The Coach proposes instructions targeted at the Player’s capability and receives rewards based on changes in the Player’s performance, while the Player is rewarded for solving the increasingly instructive tasks generated by the Coach. This cooperative optimization loop is designed to directly enhance the Player’s mathematical reasoning ability. Remarkably, \framework achieves substantial improvements without relying on any external training data, outperforming existing unsupervised approaches. For example, on Qwen2.5-Math-7B-Instruct, our method improves overall average accuracy by +4.9 and out-of-distribution (OOD) average accuracy by +5.4, exceeding RENT by +1.5 on overall accuracy and R-Zero by +4.2 on OOD accuracy.
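To make the described reward structure concrete, the following is a minimal, runnable toy sketch of the Coach–Player cooperative loop, assuming a simplified setting: the class names, the scalar skill/difficulty model, and the update rules are hypothetical stand-ins for illustration only, not the paper's actual models or training procedure.

```python
"""Toy sketch of the Coach-Player cooperative loop (illustrative assumptions only)."""
import random


class Player:
    """Stand-in for the reasoning model: a scalar 'skill' replaces real weights."""

    def __init__(self, skill: float = 0.3):
        self.skill = skill

    def solve(self, difficulty: float) -> bool:
        # Tasks at or below the Player's skill are solved more often.
        return random.random() < max(0.0, 1.0 - (difficulty - self.skill))

    def update(self, difficulty: float, solved: bool) -> None:
        # Player reward: solving an instructive (hard-but-solvable) task raises skill,
        # a crude proxy for an RL policy update.
        if solved and difficulty > self.skill:
            self.skill += 0.02


class Coach:
    """Proposes task difficulties targeted at the Player's current capability."""

    def __init__(self, offset: float = 0.1):
        self.offset = offset  # how far above the Player's skill to aim

    def propose(self, player: Player, n: int = 16) -> list[float]:
        return [player.skill + self.offset + random.gauss(0, 0.05) for _ in range(n)]

    def update(self, improvement: float) -> None:
        # Coach reward: the change in the Player's evaluated performance.
        self.offset += 0.05 if improvement > 0 else -0.05
        self.offset = min(max(self.offset, 0.0), 0.5)


def evaluate(player: Player, eval_difficulties: list[float]) -> float:
    """Fraction of a fixed held-out evaluation set the Player solves."""
    return sum(player.solve(d) for d in eval_difficulties) / len(eval_difficulties)


if __name__ == "__main__":
    random.seed(0)
    player, coach = Player(), Coach()
    held_out = [0.5] * 200  # fixed evaluation tasks; training tasks come from the Coach

    for step in range(20):
        baseline = evaluate(player, held_out)
        # Player trains on Coach-generated tasks; no external training data is consumed.
        for difficulty in coach.propose(player):
            player.update(difficulty, player.solve(difficulty))
        # Coach is rewarded by the change in the Player's performance.
        coach.update(evaluate(player, held_out) - baseline)

    print(f"final player skill: {player.skill:.2f}")
```

The point of the sketch is the coupling of the two reward signals: the Player is rewarded for solving the Coach's tasks, while the Coach is rewarded only when its proposed tasks actually improve the Player, which drives it toward increasingly instructive task generation.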
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19346