Language Models That Think, Chat Better

Published: 23 Sept 2025 · Last Modified: 07 Dec 2025 · FoRLM 2025 · CC BY 4.0
Keywords: language model, post-training, reinforcement learning, thinking, test-time scaling, GRPO
TL;DR: We teach language models to think generally, leading to improvements across a range of tasks.
Abstract: Reinforcement learning with verifiable rewards (RLVR) improves language model reasoning by using rule-based rewards in verifiable domains such as mathematics and code. However, RLVR generalizes poorly to open-ended tasks, such as writing essay outlines or making meal plans, where humans reason routinely. This paper shows that the RLVR paradigm is effective beyond verifiable domains, and introduces **RL with Model-rewarded Thinking (RLMT)** for general-purpose chat capabilities. Using diverse real-world prompts, RLMT requires LMs to generate long CoT reasoning before responding, and optimizes them with online RL against a preference-based reward model of the kind used in RLHF. Across 40 training runs on Llama-3.1-8B and Qwen-2.5-7B (both base and instruct) and multiple optimization algorithms (DPO, PPO, and GRPO), RLMT consistently outperforms standard RLHF pipelines. This includes substantial gains of 3–7 points on three chat benchmarks (AlpacaEval2, WildBench, and ArenaHardV2), along with 1–3 point improvements on other tasks such as creative writing and general knowledge. Our best 8B model surpasses GPT-4o in chat and creative writing and rivals Claude-3.7-Sonnet (Thinking). RLMT can also be applied directly to base models without an SFT stage, akin to DeepSeek-R1-Zero training. Remarkably, with only 7K prompts, Llama-3.1-8B base trained with our RLMT recipe outperforms Llama-3.1-8B-Instruct, which was post-trained with a complex multi-stage pipeline on 25M+ examples. We close with qualitative and quantitative analyses of how trained models plan their responses. Our results prompt a rethinking of the post-training pipeline and call on future work to understand and employ thinking more broadly.
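To make the abstract's core computation concrete, the sketch below illustrates the GRPO-style update that RLMT builds on: sample a group of thought-plus-response completions per prompt, score them with a preference reward model (rather than a rule-based verifier), and normalize rewards within each group into advantages for a clipped policy-gradient loss. This is a minimal illustrative sketch, not the authors' released code; the tensor shapes and the use of random numbers in place of real reward-model scores are assumptions for demonstration.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: [num_prompts, G] scalar reward-model scores for G sampled
    # <think>...</think> + response completions per prompt. GRPO normalizes
    # each completion against its own group's mean and std, so no learned
    # value network is needed.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def clipped_policy_loss(logp_new: torch.Tensor,
                        logp_old: torch.Tensor,
                        advantages: torch.Tensor,
                        clip_eps: float = 0.2) -> torch.Tensor:
    # Standard PPO-style clipped surrogate, applied per completion.
    # logp_new / logp_old: [num_prompts, G] summed token log-probs of each
    # completion under the current and the sampling policy, respectively.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.minimum(unclipped, clipped).mean()

# Toy usage: random numbers stand in for reward-model scores and log-probs.
torch.manual_seed(0)
rewards = torch.randn(4, 8)                      # 4 prompts, G = 8 samples each
adv = grpo_advantages(rewards)
logp_old = torch.randn(4, 8)
logp_new = logp_old + 0.05 * torch.randn(4, 8)   # slightly updated policy
loss = clipped_policy_loss(logp_new, logp_old, adv)
print(loss.item())
```

The key difference from verifiable-domain RLVR is only the reward source: a preference-based reward model scores the final response after the model's hidden reasoning, which is what lets the same online-RL machinery apply to open-ended chat prompts.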
Submission Number: 91