Keywords: reinforcement learning, policy optimization, SAPO, PPO, smooth clipping, gate functions, policy gradients, importance sampling, optimization stability, temperature scaling
TL;DR: We propose and analyze alternative smooth gate functions for SAPO that control policy updates via different gradient decay behaviors, improving stability over hard clipping.
Abstract: Group Relative Policy Optimization (GRPO) has significantly advanced the training of large language models and enhanced their reasoning capabilities, but it remains susceptible to instability due to its use of hard clipping. Soft Adaptive Policy Optimization (SAPO) addresses this limitation by replacing clipping with a smooth sigmoid-based gate function, which yields more stable updates. We extend this line of work by investigating how the choice of gate function affects both training stability and final model performance. We formalize the key properties that admissible gates should satisfy and propose several families of such functions for empirical evaluation. This paper presents an analysis of our findings based on experiments conducted with the Qwen2.5-7B-Instruct model on mathematical reasoning tasks. These results provide practical guidance for designing smoother and more robust policy optimization objectives for large language model training.
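To make the contrast in the abstract concrete, the sketch below compares PPO/GRPO-style hard clipping of the importance ratio with an illustrative smooth sigmoid gate. This is a minimal sketch for intuition only: the gate form, the temperature `tau`, and the threshold `eps` are assumptions for illustration, not SAPO's exact formulation.

```python
import math

def hard_clip(ratio: float, eps: float = 0.2) -> float:
    """PPO/GRPO-style hard clipping of the importance ratio.
    Outside [1 - eps, 1 + eps] the weight is constant, so the
    gradient with respect to the policy vanishes abruptly."""
    return max(1.0 - eps, min(1.0 + eps, ratio))

def sigmoid_gate(ratio: float, tau: float = 10.0, eps: float = 0.2) -> float:
    """Hypothetical smooth sigmoid gate (for illustration; not
    necessarily SAPO's exact gate). The ratio is multiplied by a
    sigmoid that decays smoothly as |ratio - 1| exceeds eps, so
    gradients shrink gradually instead of cutting off at a boundary.
    tau acts as a temperature controlling how sharply the gate closes."""
    gate = 1.0 / (1.0 + math.exp(tau * (abs(ratio - 1.0) - eps)))
    return ratio * gate
```

Under this sketch, `hard_clip` saturates exactly at the clip boundary, while `sigmoid_gate` down-weights off-policy updates continuously, which is the kind of gradient decay behavior the paper studies across different gate families.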
Submission Number: 56