On the Hidden Objective Biases of Group-based Reinforcement Learning

ACL ARR 2026 January Submission4804 Authors

05 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Large Language Models, Reinforcement Learning, Theoretical Analysis, Group Relative Policy Optimization, AdamW Optimizer, Gradient Bias
Abstract: Group-based reinforcement learning methods, such as Group Relative Policy Optimization (GRPO), are widely used to post-train large language models. Despite their empirical success, they exhibit structural mismatches between reward optimization and the underlying training objective. In this paper, we present a theoretical analysis of GRPO-style methods by studying them within a unified surrogate formulation. This perspective reveals recurring properties shared by all of the methods we analyze: (i) non-uniform group weighting induces systematic gradient biases on shared prefix tokens; (ii) interactions with the AdamW optimizer make training dynamics largely insensitive to reward scaling; and (iii) optimizer momentum can push policy updates beyond the intended clipping region under repeated optimization steps. These findings highlight fundamental limitations of current approaches and provide principled guidance for the design of future formulations.
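Claim (ii) follows from a well-known property of Adam-style updates that can be checked independently of the paper's analysis: the normalized step m_t / (sqrt(v_t) + eps) divides out the overall gradient magnitude, so uniformly rescaling the loss, which is what a global reward rescaling amounts to, leaves the parameter trajectory almost unchanged. The PyTorch sketch below is purely illustrative and not taken from the paper; the quadratic stand-in loss and the `scale` knob mimicking reward scaling are assumptions for demonstration only.

```python
import torch

def adamw_trajectory(scale, steps=50, lr=1e-3):
    # Reset the seed so every run starts from the same parameters
    # and only `scale` differs between trajectories.
    torch.manual_seed(0)
    w = torch.nn.Parameter(torch.randn(10))
    opt = torch.optim.AdamW([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Stand-in surrogate loss; multiplying by `scale` mimics
        # rescaling all rewards by a constant factor.
        loss = scale * (w ** 2).sum()
        loss.backward()
        opt.step()
    return w.detach()

# AdamW's update m_t / (sqrt(v_t) + eps) normalizes away the gradient
# magnitude, so a 100x rescaling leaves the trajectory nearly unchanged.
print(torch.norm(adamw_trajectory(1.0) - adamw_trajectory(100.0)))  # ~0
```

The residual difference comes only from the eps term in the denominator (and decoupled weight decay, which ignores the gradient entirely), consistent with the abstract's point that reward scaling has little effect on training dynamics under AdamW.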
Paper Type: Short
Research Area: Natural Language Generation
Research Area Keywords: Language Modeling
Contribution Types: Position papers, Theory
Languages Studied: English
Submission Number: 4804