Keywords: Reinforcement learning, LLM reasoning, Curriculum learning
TL;DR: We propose CurES, a curriculum learning method for LLM reasoning, grounded in a systematic theoretical investigation of how to improve the training efficiency of LLMs.
Abstract: Curriculum learning plays a crucial role in enhancing the training efficiency of large language models (LLMs) on reasoning tasks. However, existing methods often fail to adequately account for variations in prompt difficulty or rely on simplistic filtering mechanisms that restrict prompt selection to a narrow criterion range, resulting in significant computational waste. In this work, we approach the problem from the perspective of reinforcement learning gradient optimization, offering a systematic and theoretical investigation into how to improve the training efficiency of LLMs. We identify two key factors influencing training efficiency: the selection of training prompts and the allocation of rollout quantities across different prompts. Our theoretical analysis reveals that the sampling distribution of prompts dictates the convergence rate of gradient descent, while the allocation of rollout quantities influences the consistency and stability of overall gradient updates. Based on these insights, we propose CurES, an efficient training method that accelerates convergence and employs Bayesian posterior estimation to minimize computational overhead. Experiments demonstrate that CurES outperforms Group Relative Policy Optimization (GRPO) by \textbf{+3.3} points and \textbf{+4.82} points with 1.5B and 7B models, respectively, and exceeds the best prior sample-efficient methods by \textbf{+2.12} points on average across eight math reasoning benchmarks. CurES also converges faster than baselines such as GRPO.
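To make the abstract's two levers concrete (prompt sampling distribution and rollout allocation), here is a minimal illustrative sketch, not the paper's actual CurES algorithm: it assumes a conjugate Beta posterior over each prompt's pass rate, a hypothetical `PromptPosterior` helper, and a simple p(1-p) weighting that favors prompts of intermediate difficulty when sampling prompts and splitting a fixed rollout budget.

```python
import numpy as np

# Illustrative sketch only (assumptions noted above); the paper's CurES
# method may use a different posterior, weighting, and allocation rule.

class PromptPosterior:
    def __init__(self, num_prompts, alpha0=1.0, beta0=1.0):
        # Beta(alpha0, beta0) prior on each prompt's success probability.
        self.alpha = np.full(num_prompts, alpha0)
        self.beta = np.full(num_prompts, beta0)

    def update(self, prompt_id, successes, failures):
        # Conjugate Bayesian update from observed rollout outcomes.
        self.alpha[prompt_id] += successes
        self.beta[prompt_id] += failures

    def success_rate(self):
        # Posterior mean pass rate per prompt.
        return self.alpha / (self.alpha + self.beta)

    def sampling_distribution(self):
        # Heuristic weighting p * (1 - p): peaks at p = 0.5, so prompts that
        # are trivially easy or currently unsolvable receive little weight.
        p = self.success_rate()
        w = p * (1.0 - p) + 1e-6
        return w / w.sum()

    def allocate_rollouts(self, total_rollouts):
        # Split a fixed rollout budget in proportion to the sampling weights.
        return np.random.multinomial(total_rollouts, self.sampling_distribution())


# Toy usage: 4 prompts; prompt 0 succeeded on 3 of 8 rollouts so far.
posterior = PromptPosterior(num_prompts=4)
posterior.update(prompt_id=0, successes=3, failures=5)
print(posterior.sampling_distribution())
print(posterior.allocate_rollouts(total_rollouts=32))
```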
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 16902