On the Optimization Dynamics of RLVR: Gradient Gap and Step Size Thresholds

ICLR 2026 Conference Submission 18511 Authors (anonymous)

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Reinforcement Learning with Verifiable Rewards, Post-Training of Large Language Models, Policy Gradient Methods, Optimization Dynamics, Step Size Scaling, Gradient Gap
TL;DR: We develop a theory for RLVR, showing that successful training hinges on aligning updates with a “Gradient Gap” and carefully scaling the step size, which explains practical heuristics like length normalization and predicts when training may fail.
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR), which uses simple binary feedback to post-train large language models, has shown significant empirical success. However, a principled understanding of why it works has been lacking. This paper builds a theoretical foundation for RLVR by analyzing its training process at both the full-response (trajectory) and token levels. Central to our analysis is a quantity called the Gradient Gap, which formalizes the direction of improvement from low-reward to high-reward regions of the response space. We prove that convergence critically depends on aligning the update direction with this Gradient Gap. Moreover, we derive a sharp step-size threshold based on the magnitude of the Gradient Gap: below it, learning converges, whereas above it, performance collapses. Our theory further predicts how the critical step size must scale with response length and the success rate, thereby explaining why practical heuristics such as length normalization improve stability and showing that, with a fixed learning rate, the success rate can stagnate strictly below 100%. We validate these predictions through controlled bandit simulations and LLM experiments, including training Qwen2.5-7B with GRPO.
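To make the step-size threshold in the abstract concrete, here is a generic descent-lemma-style sketch, not the paper's exact theorem or definitions: the symbols J (success probability as a function of the policy parameters θ), d (update direction), η (step size), and L (a smoothness constant along d) are illustrative notation introduced here, with the paper's Gradient Gap playing the role of the alignment term ⟨∇J(θ), d⟩. If J is L-smooth, then
\[
J(\theta + \eta\, d) \;\ge\; J(\theta) + \eta\,\langle \nabla J(\theta),\, d\rangle \;-\; \frac{L}{2}\,\eta^{2}\,\|d\|^{2},
\]
so improvement is guaranteed only when the alignment is positive and the step size satisfies
\[
0 \;<\; \eta \;<\; \frac{2\,\langle \nabla J(\theta),\, d\rangle}{L\,\|d\|^{2}}.
\]
Below such a threshold the linear alignment term dominates and the update provably helps; above it the quadratic term dominates the bound, and the matching upper bound shows that a large enough step can decrease J, mirroring the convergence-versus-collapse dichotomy the abstract describes. Under this reading, a threshold that shrinks with response length (which inflates ‖d‖²) or with a vanishing alignment term would also rationalize length normalization and the predicted stagnation below 100%.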
Supplementary Material: pdf
Primary Area: learning theory
Submission Number: 18511