Keywords: Reinforcement Learning, Large Language Models, Post-training, Degenerate MDP
Abstract: Reinforcement learning based post-training of large language models (LLMs) has recently gained attention, particularly following the release of DeepSeek R1, which applied GRPO for fine-tuning. Amid the growing hype around improved reasoning abilities attributed to RL post-training, we critically examine the formulation and assumptions underlying these methods. We start by highlighting popular structural assumptions made in modeling LLM training as an MDP, and show how they lead to a degenerate MDP that doesn’t quite need the RL/GRPO apparatus. The two critical structural assumptions are (1) defining the MDP states as the concatenation of the actions taken so far, so that states become the context window and actions become the tokens generated by the LLM, and (2) splitting the reward of a state-action trajectory uniformly across the trajectory. Our comprehensive analysis demonstrates that, due to these simplifying assumptions, the standard approach is effectively equivalent to outcome-driven supervised learning. Our experiments on benchmarks including GSM8K and Countdown using Qwen-2.5 base models and Llama-3.2 instruct models show that Filtered Iterative SFT, incorporating both positive and negative samples, achieves performance comparable to GRPO-based training. We also show that these structural assumptions indirectly incentivize RL to generate longer sequences of intermediate tokens, which in turn feeds the narrative of “RL incentivizing thinking because it generates longer thinking traces.” We continue to believe that RL writ large can be a great tool for post-training LLMs, and hope that our analysis of the limiting assumptions in the currently popular RL framework encourages work that goes beyond patching the symptoms of faulty assumptions and towards improving the underlying formulation.
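As a minimal sketch of the reduction the abstract describes (assumed notation, with the standard GRPO group-normalized outcome advantage and the clipping/KL terms omitted; this is not the paper's exact derivation): when each sampled completion $o_i$ for a prompt $q$ receives a single outcome reward $r_i$, the group-relative advantage $\hat{A}_i = \bigl(r_i - \mathrm{mean}(r_1,\dots,r_G)\bigr)/\mathrm{std}(r_1,\dots,r_G)$ is applied identically to every token of $o_i$, so the policy gradient takes the form

\[
\nabla_\theta J(\theta) \;\approx\; \frac{1}{G}\sum_{i=1}^{G} \hat{A}_i \,\frac{1}{|o_i|}\sum_{t=1}^{|o_i|} \nabla_\theta \log \pi_\theta\!\bigl(o_{i,t} \mid q,\, o_{i,<t}\bigr),
\]

i.e., a per-sequence-weighted log-likelihood gradient that up-weights high-reward completions and down-weights low-reward ones, which is the sense in which the update resembles outcome-filtered supervised fine-tuning on positive and negative samples.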
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 20597