Keywords: llm, grpo, rlhf
TL;DR: On the Theory and Practice of GRPO: A Trajectory-Corrected Approach with Fast Convergence
Abstract: Group Relative Policy Optimization (GRPO), recently introduced by DeepSeek, is a critic-free reinforcement learning algorithm for fine-tuning large language models. GRPO replaces the value function in Proximal Policy Optimization (PPO) with group-normalized rewards while retaining PPO-style token-level importance sampling based on an old policy. We show that the GRPO update rule actually estimates the policy gradient at the old policy rather than the current one; however, because the old policy is refreshed every few steps, the gap remains small and the resulting bias is negligible in practice. To validate this, we perform an ablation study that removes importance sampling entirely and instead applies gradients estimated at a fixed old policy across multiple optimization steps. Remarkably, this simplified approach achieves performance comparable to standard GRPO.
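For reference, here is a minimal sketch of the objects discussed above, written in the standard GRPO notation; the clipping and KL-penalty terms, and any details specific to this paper, are omitted, and the symbols $q$, $o_i$, $G$, $r_i$ are introduced purely for illustration. Given a prompt $q$, a group of responses $o_1,\dots,o_G \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)$ with scalar rewards $r_1,\dots,r_G$ is scored by the group-normalized advantage, and the update maximizes a PPO-style surrogate built from token-level ratios:

$$\hat{A}_i = \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)}, \qquad J_{\mathrm{GRPO}}(\theta) = \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\frac{\pi_{\theta}(o_{i,t}\mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t}\mid q, o_{i,<t})}\,\hat{A}_i\right].$$

Because every ratio is anchored at $\pi_{\theta_{\mathrm{old}}}$, the gradient of this surrogate is the quantity whose relation to the old-policy gradient the abstract analyzes.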
Motivated by these findings, we propose a new algorithm: Trajectory-level Importance-Corrected GRPO (TIC-GRPO). TIC-GRPO replaces token-level importance ratios with a single trajectory-level probability ratio, yielding an unbiased estimate of the current policy gradient while preserving the critic-free structure. Furthermore, we present the first theoretical convergence analysis for GRPO-style methods, covering both the original GRPO and our proposed variant.
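As an illustration of the trajectory-level correction (a sketch only; the abstract does not state the exact estimator, so the form below is an assumption), the single sequence-level ratio factors over tokens, and reweighting whole-trajectory score functions by it gives a current-policy gradient estimate from samples drawn under $\pi_{\theta_{\mathrm{old}}}$:

$$\frac{\pi_{\theta}(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)} = \prod_{t=1}^{|o_i|}\frac{\pi_{\theta}(o_{i,t}\mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t}\mid q, o_{i,<t})}, \qquad \hat{g}(\theta) = \frac{1}{G}\sum_{i=1}^{G}\frac{\pi_{\theta}(o_i\mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i\mid q)}\,\hat{A}_i\,\nabla_{\theta}\log\pi_{\theta}(o_i\mid q).$$

The correction is the standard trajectory-level importance-sampling identity $\mathbb{E}_{o\sim\pi_{\theta_{\mathrm{old}}}}\!\big[\tfrac{\pi_{\theta}(o\mid q)}{\pi_{\theta_{\mathrm{old}}}(o\mid q)}\, f(o)\big] = \mathbb{E}_{o\sim\pi_{\theta}}[f(o)]$; the structure stays critic-free because $\hat{A}_i$ is still computed from group-normalized rewards rather than a value network.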
Primary Area: foundation or frontier models, including LLMs
Submission Number: 3546