State-wise Constrained Policy Optimization

Published: 09 Apr 2024, Last Modified: 09 Apr 2024, Accepted by TMLR
Abstract: Reinforcement Learning (RL) algorithms have shown tremendous success in simulation environments, but their application to real-world problems faces significant challenges, with safety being a major concern. In particular, enforcing state-wise constraints is essential for many challenging tasks such as autonomous driving and robot manipulation. However, existing safe RL algorithms under the framework of the Constrained Markov Decision Process (CMDP) do not consider state-wise constraints. To address this gap, we propose State-wise Constrained Policy Optimization (SCPO), the first general-purpose policy search algorithm for state-wise constrained reinforcement learning. SCPO provides guarantees for state-wise constraint satisfaction in expectation. Specifically, we introduce the framework of the Maximum Markov Decision Process (MMDP) and prove that the worst-case safety violation is bounded under SCPO. We demonstrate the effectiveness of our approach by training neural network policies on a wide range of robot locomotion tasks in which the agent must satisfy a variety of state-wise safety constraints. Our results show that SCPO significantly outperforms existing methods and can handle state-wise constraints in high-dimensional robotics tasks.
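To make the distinction the abstract draws concrete, the following is a minimal formal sketch in standard CMDP notation; the cost function C, discount γ, horizon H, and the thresholds d and w are assumed symbols for illustration and are not taken from this page:

```latex
% Classical CMDP constraint: only the *expected cumulative* cost is bounded,
% so any single state along a trajectory may still incur a large cost.
J_C(\pi) \;=\; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{H} \gamma^{t}\, C(s_t, a_t)\right] \;\le\; d

% State-wise constraint (the setting SCPO targets): the cost must be bounded
% in expectation at *every* step of the trajectory, e.g.
\mathbb{E}_{\tau \sim \pi}\!\left[\, C(s_t, a_t) \,\right] \;\le\; w
\qquad \text{for all } t \in \{0, \dots, H\}.
```

Under this reading, a CMDP-based method can trade a brief, severe violation against low cost elsewhere in the episode, whereas the state-wise formulation rules that out at each step, which is why the abstract emphasizes bounding the worst-case violation via the MMDP construction.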
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/intelligent-control-lab/StateWise_Constrained_Policy_Optimization
Supplementary Material: zip
Assigned Action Editor: ~Dinesh_Jayaraman2
Submission Number: 1875