Keywords: Language Models, Reinforcement Learning, Code Generation
Abstract: Modern code generation models produce longer outputs, improve in capability at an accelerating pace, and exhibit fundamentally different training dynamics, rendering traditional training methodologies, algorithms, and datasets ineffective for enhancing their performance. To address these training bottlenecks, we propose MicroCoder-GRPO, an enhanced Group Relative Policy Optimization approach with three key innovations: conditional truncation masking to unlock long-output potential while maintaining training stability, diversity-determined temperature selection to preserve and encourage output diversity, and removal of the KL loss combined with high clipping ratios to facilitate exploration. MicroCoder-GRPO achieves up to a 17.6% relative improvement over strong baselines on LiveCodeBench v6, with more pronounced gains under extended-context evaluation. We additionally release MicroCoder-Dataset, a more challenging training corpus that yields 3× larger performance gains than mainstream datasets on LiveCodeBench v6 within 300 training steps, and MicroCoder-Evaluator, a robust evaluation framework with approximately 25% higher evaluation accuracy and around 40% faster execution. Through comprehensive analysis of more than thirty controlled experiments, we distill 34 key training insights across seven main aspects, demonstrating that properly trained models can achieve performance competitive with larger counterparts.
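The abstract names three modifications to the GRPO objective: masking out truncated rollouts, dropping the KL-to-reference penalty, and widening the clipping range. The sketch below illustrates how such a modified group-relative loss could be assembled; it is a minimal, simplified rendering under assumed details, not the authors' implementation. In particular, the truncation masking shown here is unconditional (all length-capped samples are excluded), whereas the paper's "conditional" rule is presumably more selective, and all function names, tensor shapes, and hyperparameter values (e.g. clip_low, clip_high) are illustrative assumptions.

```python
import torch


def grpo_loss(
    logp_new: torch.Tensor,    # (B, T) token log-probs under the current policy
    logp_old: torch.Tensor,    # (B, T) token log-probs under the rollout policy
    rewards: torch.Tensor,     # (B,)  scalar rewards for one group of B samples
    token_mask: torch.Tensor,  # (B, T) 1 for valid completion tokens, 0 for padding
    truncated: torch.Tensor,   # (B,)  1 if the sample hit the length cap
    clip_low: float = 0.2,     # assumed lower clip ratio
    clip_high: float = 0.28,   # assumed "high" upper clip ratio to ease exploration
) -> torch.Tensor:
    # Group-relative advantage: normalize rewards within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)        # (B,)
    adv = adv.unsqueeze(-1)                                          # (B, 1)

    # PPO-style clipped surrogate with an asymmetric (widened) upper bound.
    ratio = torch.exp(logp_new - logp_old)                           # (B, T)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_low, 1.0 + clip_high) * adv
    per_token = -torch.min(unclipped, clipped)                       # (B, T)

    # Truncation masking (simplified): drop length-capped samples so they are
    # neither rewarded nor penalized for being cut off mid-solution.
    keep = token_mask * (1.0 - truncated.float()).unsqueeze(-1)      # (B, T)

    # No KL-to-reference penalty is added, which permits broader exploration.
    return (per_token * keep).sum() / keep.sum().clamp(min=1.0)
```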
Primary Area: reinforcement learning
Submission Number: 16185