Simple Policy Optimization

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We introduce a novel model-free reinforcement learning algorithm.
Abstract: Model-free reinforcement learning algorithms have seen remarkable progress, but key challenges remain. Trust Region Policy Optimization (TRPO) is known for ensuring monotonic policy improvement through conservative updates within a trust region, backed by strong theoretical guarantees. However, its reliance on complex second-order optimization limits its practical efficiency. Proximal Policy Optimization (PPO) addresses this by simplifying TRPO's approach using ratio clipping, improving efficiency but sacrificing some theoretical robustness. This raises a natural question: Can we combine the strengths of both methods? In this paper, we introduce Simple Policy Optimization (SPO), a novel unconstrained first-order algorithm. By slightly modifying the policy loss used in PPO, SPO can achieve the best of both worlds. Our new objective improves upon ratio clipping, offering stronger theoretical properties and better constraining the probability ratio within the trust region. Empirical results demonstrate that SPO outperforms PPO with a simple implementation, particularly for training large, complex network architectures end-to-end.
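For context on the ratio clipping that SPO modifies, below is a minimal sketch of the standard PPO clipped surrogate loss. This is not SPO's objective (the exact SPO loss is given in the paper and the linked repository); the function name and variables (ppo_clipped_loss, logp_new, logp_old, advantages, clip_eps) are illustrative assumptions.

```python
import torch

def ppo_clipped_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """Standard PPO policy loss: clip the probability ratio to [1 - eps, 1 + eps].

    SPO replaces this clipping mechanism with a modified objective that, per the
    abstract, constrains the ratio within the trust region more effectively.
    """
    ratio = torch.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Maximize the surrogate objective, so return its negative for a minimizer.
    return -torch.mean(torch.min(unclipped, clipped))
```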
Lay Summary: We propose an improved version of the well-known Proximal Policy Optimization (PPO) algorithm in reinforcement learning (RL), called Simple Policy Optimization (SPO), which we demonstrate to be more stable.
Link To Code: https://github.com/MyRepositories-hub/Simple-Policy-Optimization
Primary Area: Reinforcement Learning->Online
Keywords: Model-Free Reinforcement Learning, Policy Optimization
Submission Number: 8382