Abstract: Reinforcement learning with verifiable rewards (RLVR) has emerged as the leading approach for enhancing reasoning capabilities in large language models. However, it faces a fundamental compute and memory asymmetry: rollout generation is embarrassingly parallel and memory-light, whereas policy updates are communication-heavy and memory-intensive. To address this, we introduce **PODS** (**P**olicy **O**ptimization with **D**own-**S**ampling), which decouples rollout generation from policy updates by training only on a strategically selected subset of rollouts, preserving learning quality while dramatically reducing update costs. We propose a principled subset selection criterion—*max-variance down-sampling*—that maximizes the reward variance within the selected subset, and provide an efficient $O(n\log n)$ implementation of this rule. Empirically, Group Relative Policy Optimization (GRPO) with PODS reaches the peak test accuracy of vanilla GRPO at least $\mathbf{1.7\times}$ **faster** across the reasoning benchmarks and hardware configurations we tested.
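The abstract only states that the max-variance down-sampling rule admits an $O(n\log n)$ implementation. The sketch below illustrates one way such a rule could be implemented, assuming the standard observation that a reward-variance-maximizing subset of size $k$ can be formed from some number of the lowest-reward rollouts together with the remainder drawn from the highest-reward ones; sorting once and scanning the $k+1$ candidate splits with prefix sums then gives the stated $O(n\log n)$ cost. The function name `max_variance_downsample`, the NumPy usage, and the population-variance convention are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np


def max_variance_downsample(rewards, k):
    """Pick indices of a size-k subset of `rewards` with maximal reward variance.

    Illustrative sketch: assumes some optimal subset consists of the i lowest
    and (k - i) highest rewards for a split i in {0, ..., k}; sorting dominates
    the cost, so the routine runs in O(n log n).
    """
    rewards = np.asarray(rewards, dtype=float)
    n = len(rewards)
    assert 1 <= k <= n

    order = np.argsort(rewards)          # indices sorted by ascending reward
    r = rewards[order]

    # Prefix sums of rewards and squared rewards allow O(1) variance queries
    # for any "i lowest + (k - i) highest" candidate subset.
    ps = np.concatenate(([0.0], np.cumsum(r)))
    ps2 = np.concatenate(([0.0], np.cumsum(r * r)))

    best_var, best_i = -1.0, 0
    for i in range(k + 1):               # i rollouts from the bottom, k - i from the top
        top = k - i
        s = ps[i] + (ps[n] - ps[n - top])
        s2 = ps2[i] + (ps2[n] - ps2[n - top])
        var = s2 / k - (s / k) ** 2      # population variance of the candidate subset
        if var > best_var:
            best_var, best_i = var, i

    top = k - best_i
    return np.concatenate((order[:best_i], order[n - top:]))
```

A small hypothetical usage example: with binary or fractional verifier scores for $n = 8$ rollouts, the rule keeps an extreme subset of size $k = 4$ rather than a uniformly random one.

```python
rewards = [1.0, 0.0, 0.0, 1.0, 0.5, 0.25, 0.75, 1.0]   # verifier scores for n = 8 rollouts
keep = max_variance_downsample(rewards, k=4)
print(sorted(rewards[i] for i in keep))                 # -> [0.0, 0.0, 1.0, 1.0]
```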
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Romain_Laroche1
Submission Number: 6708