Towards High Data Efficiency in Reinforcement Learning with Verifiable Reward

Published: 26 Jan 2026, Last Modified: 11 Apr 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Data Efficiency, Reinforcement Learning with Verifiable Reward
TL;DR: We propose DEPO, a Data-Efficient Policy Optimization pipeline for Reinforcement Learning with Verifiable Reward that integrates optimized strategies for both offline and online data selection.
Abstract: Recent advances in large language models (LLMs) have utilized reinforcement learning with verifiable rewards (RLVR) to improve reasoning capabilities. However, scaling these methods typically requires massive data and extensive rollout computation, leading to high training costs and low data efficiency. To mitigate this issue, we propose DEPO, a Data-Efficient Policy Optimization approach that combines optimized strategies for both offline and online data selection. In the offline phase, we curate a high-quality subset of training data based on multiple objectives, including diversity, influence, and difficulty. During online RLVR training, we propose a sample-level explorability metric to dynamically filter out samples with low exploration potential, thereby substantially reducing rollout computation costs. Additionally, we employ a replay mechanism for under-explored samples to ensure sufficient training, which enhances final convergence performance. Experiments on five reasoning benchmarks show that DEPO consistently outperforms existing methods in both offline and online data selection scenarios. Notably, using only 20% of the training data, our approach achieves a 1.85 $\times$ speed-up on AIME24 and a 1.66 $\times$ speed-up on AIME25 compared to GRPO trained on the full dataset.
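The online filtering step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the abstract does not define the explorability metric, so here it is approximated by the variance of binary rollout rewards (groups whose rollouts are all correct or all incorrect yield zero advantage under GRPO-style training, hence low exploration potential). The names `explorability`, `filter_batch`, `rollout_fn`, and the threshold value are all hypothetical.

```python
from collections import deque

def explorability(rewards):
    # Assumed proxy metric: variance of binary (0/1) verifiable rewards.
    # p * (1 - p) is zero when all rollouts agree, maximal at p = 0.5.
    p = sum(rewards) / len(rewards)
    return p * (1.0 - p)

def filter_batch(batch, rollout_fn, threshold=0.05, replay=None):
    """Keep samples with exploration potential; queue all-failing samples
    into a replay buffer so they can be revisited later in training."""
    kept = []
    for sample in batch:
        rewards = rollout_fn(sample)  # e.g. verifiable 0/1 rewards per rollout
        if explorability(rewards) > threshold:
            kept.append((sample, rewards))
        elif replay is not None and sum(rewards) == 0:
            replay.append(sample)  # under-explored: retry in a later epoch
    return kept
```

In this sketch, skipped samples cost no policy-update computation, and the replay buffer ensures hard (all-incorrect) samples are not permanently discarded, mirroring the replay mechanism the abstract credits for improved convergence.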
Primary Area: reinforcement learning
Submission Number: 1876