RAPID: An Efficient Reinforcement Learning Algorithm for Small Language Models

ICLR 2026 Conference Submission19758 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: small language models, efficient training, reinforcement learning
TL;DR: We introduce a novel algorithm to accelerate reinforcement learning for small language models.
Abstract: Reinforcement learning (RL) has emerged as a promising strategy for fine-tuning small language models (SLMs) to solve targeted tasks such as math and coding. However, RL algorithms tend to be resource-intensive, requiring a significant amount of training time. We propose RAPID, a novel RL algorithm that substantially reduces the running time of RL. Our key insight is that RL tends to be costly because it must interleave inference and backpropagation during training. To maximize the use of computational resources, our algorithm performs inference in large batches, and then performs off-policy policy gradient updates in mini-batches. For the off-policy updates, we incorporate group advantage estimation into the policy gradient algorithm and derive an importance-weighted estimator to correct for the bias arising from off-policy learning. Our experiments demonstrate that our algorithm reduces running time by 11%-34% on three benchmarks compared to state-of-the-art RL algorithms while maintaining similar or better accuracy.
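
The abstract's two key ingredients, group advantage estimation and an importance-weighted off-policy correction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the group normalization follows the common GRPO-style recipe (normalize rewards across responses to the same prompt), and the importance weight is the standard likelihood ratio between the current policy and the stale policy that generated the large inference batch. All function names and the `1e-8` stabilizer are assumptions.

```python
import numpy as np

def group_advantages(rewards):
    """GRPO-style group advantage estimation (assumed variant).

    rewards: array of shape (num_prompts, num_responses_per_prompt),
    one reward per sampled response. Advantages are rewards normalized
    within each prompt's group of responses.
    """
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (rewards - mean) / std

def off_policy_pg_loss(logp_new, logp_old, advantages):
    """Importance-weighted policy gradient loss for off-policy mini-batches.

    logp_new: log-probs of the sampled responses under the current policy.
    logp_old: log-probs under the (frozen) policy that generated the batch.
    The ratio exp(logp_new - logp_old) reweights stale samples to correct
    for the off-policy bias; minimizing the negative weighted advantage
    ascends the policy gradient.
    """
    ratio = np.exp(logp_new - logp_old)
    return -(ratio * advantages).mean()

# Toy usage: one prompt, three sampled responses with rewards 1, 2, 3.
rewards = np.array([[1.0, 2.0, 3.0]])
adv = group_advantages(rewards)

# Immediately after the inference batch, the policies coincide
# (logp_new == logp_old), so every importance weight is 1.
logp = np.log(np.array([[0.2, 0.3, 0.5]]))
loss = off_policy_pg_loss(logp, logp, adv)
```

In an actual training loop, the large inference batch would be generated once, then reused across several mini-batch gradient steps; the ratio drifts away from 1 as the policy updates, which is exactly the bias the importance weight corrects.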
Primary Area: reinforcement learning
Submission Number: 19758