Keywords: Large Language Models; Reinforcement Learning; LLM Reasoning
Abstract: Reinforcement learning for large language models (LLMs) often relies on scalar rewards, a practice that discards valuable textual rationale buried in the rollouts and hampers training efficiency. Naive attempts to incorporate language feedback are often counterproductive, risking either memorization from leaked solutions or policy collapse from irrelevant context. To address this, we propose Language-And-Numerical Policy Optimization (LANPO), a framework that cleanly separates the roles of feedback: language guides exploration, while numerical rewards drive optimization. LANPO builds a dynamic experience pool from past trials and introduces two principles to ensure feedback is effective: Reward-Agnostic Reflection for safe intra-sample self-correction and Relevant Abstraction to distill generalizable lessons from inter-sample experiences. Across mathematical reasoning benchmarks, LANPO enables 7B and 14B models to significantly outperform strong baselines trained with GRPO in test accuracy. Our work provides a robust method for integrating historical experiences into the LLM RL loop, creating more effective and data-efficient learning agents.
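The separation the abstract describes, language feedback conditioning exploration while only scalar rewards enter the policy update, can be illustrated with a minimal sketch. This is not the authors' implementation; the names (`ExperiencePool`, `policy.generate`, `policy.reflect`, `verifier.score`, `policy.update`) are hypothetical placeholders, and the reward-agnostic reflection and relevance filtering behaviors are assumptions inferred from the two stated principles.

```python
"""Minimal sketch (assumed, not the authors' code) of the LANPO idea:
language from past trials guides exploration; GRPO-style scalar
advantages drive the optimization step."""
import random
from dataclasses import dataclass, field


@dataclass
class ExperiencePool:
    """Dynamic pool of lessons distilled from past trials (hypothetical)."""
    lessons: dict = field(default_factory=dict)  # problem_id -> list of lesson strings

    def add(self, problem_id: str, lesson: str) -> None:
        self.lessons.setdefault(problem_id, []).append(lesson)

    def sample(self, problem_id: str, k: int = 2) -> list:
        # Relevant Abstraction (assumed behavior): lessons are generalized
        # takeaways, never verbatim solutions, so context cannot leak answers.
        pool = self.lessons.get(problem_id, [])
        return random.sample(pool, min(k, len(pool)))


def group_normalized_advantages(rewards: list) -> list:
    """GRPO-style scalar advantage: each reward minus the group mean."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]


def lanpo_step(policy, verifier, pool: ExperiencePool,
               problem_id: str, prompt: str, group_size: int = 4):
    """One rollout group: language guides exploration, numbers drive updates."""
    lessons = pool.sample(problem_id)
    guided_prompt = prompt
    if lessons:
        guided_prompt += "\n\nLessons from earlier attempts:\n" + "\n".join(lessons)

    rollouts = [policy.generate(guided_prompt) for _ in range(group_size)]
    rewards = [verifier.score(problem_id, r) for r in rollouts]  # scalar only
    advantages = group_normalized_advantages(rewards)

    # Reward-Agnostic Reflection (assumed): reflections stored for future
    # exploration are produced without access to the reward or the solution.
    for rollout in rollouts:
        pool.add(problem_id, policy.reflect(rollout))

    # The gradient update consumes only prompts, rollouts, and scalar advantages;
    # the textual feedback never enters the objective directly.
    policy.update(guided_prompt, rollouts, advantages)
```

The key design point, as stated in the abstract, is that the textual feedback only shapes what the policy explores next, while the loss itself remains a standard scalar-reward objective.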
Primary Area: foundation or frontier models, including LLMs
Submission Number: 15838