Keywords: Reinforcement learning; LLM reasoning; Training acceleration
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has significantly advanced the reasoning capabilities of Large Language Models (LLMs). However, methods such as GRPO and DAPO incur substantial computational cost, since they rely on sampling many rollouts for each prompt. Moreover, in RLVR the relative advantage is often sparse: many prompts yield groups that are nearly all-correct or all-incorrect, producing low within-group reward variance and thus weak learning signals. In this paper, we introduce ARRoL (**A**ccelerating **R**LV**R** via **o**nline Ro**L**lout Pruning), an online rollout pruning method that prunes rollouts during generation while explicitly steering the surviving ones toward a more correctness-balanced set to strengthen learning signals. Specifically, ARRoL trains a lightweight quality head on the fly to predict the success probability of partial rollouts and uses it to make early pruning decisions. The learned quality head can further weight candidates to improve inference accuracy during test-time voting. To improve efficiency, we present a system design that prunes rollouts inside the inference engine and re-batches the remaining ones for log-probability computation and policy updates. Across GRPO and DAPO on Qwen-3 and LLaMA-3.2 models (1B-8B), ARRoL improves average accuracy by $+2.30$ to $+2.99$ points while achieving up to $1.7\times$ training speedup, and yields up to $+8.33$ additional average-accuracy gains in test-time voting.
Paper Type: Long
Research Area: Language Models
Research Area Keywords: Language Modeling, NLP Applications
Languages Studied: English
Submission Number: 3944