SPoRt - Safe Policy Ratio: Certified Training and Deployment of Task Policies in Model-Free RL

Published: 15 Jun 2025, Last Modified: 07 Aug 2025, AIA 2025, CC BY 4.0
Keywords: reinforcement learning, safe RL, LTL, policy adaptation, scenario approach, robust control
TL;DR: A model-free RL approach for adapting an existing 'safe' policy to maximize task-specific reward while maintaining a bound on safety violation that is known prior to rollout.
Abstract: To apply reinforcement learning to safety-critical applications, we ought to provide safety guarantees during both policy training and deployment. In this work, we present theoretical results that place a bound on the probability of violating a safety property for a new task-specific policy in a model-free, episodic setting. This bound, based on a maximum policy ratio computed with respect to a 'safe' base policy, can also be applied to temporally-extended properties (beyond safety) and to robust control problems. To utilize these results, we introduce SPoRt, which provides a data-driven method for computing this bound for the base policy using the scenario approach, and includes Projected PPO, a new projection-based approach for training the task-specific policy while maintaining a user-specified bound on property violation. SPoRt thus enables users to trade off safety guarantees against task-specific performance. Complementing our theoretical results, we present experimental results demonstrating this trade-off and comparing the theoretical bound to posterior bounds derived from empirical violation rates.
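The sketch below is a minimal, hypothetical illustration of the two ingredients the abstract names, not the authors' implementation. It assumes: (1) a scenario-approach bound on the base policy's violation probability in the simple zero-observed-violations case; (2) the generic change-of-measure bound P_task(violation) ≤ rho_max · P_base(violation), where rho_max bounds the trajectory-wise policy ratio, which is the standard importance-sampling argument behind a maximum-policy-ratio bound; and (3) a mixture-style projection of a discrete task policy toward the base policy as a stand-in for Projected PPO, whose exact projection is not specified on this page. All function names (`scenario_violation_bound`, `task_violation_bound`, `project_to_ratio_bound`) are invented for illustration.

```python
# Hypothetical sketch (not the paper's code): scenario bound + ratio-based
# violation bound + a mixture projection enforcing a per-state ratio limit.
import numpy as np

def scenario_violation_bound(n_rollouts: int, beta: float) -> float:
    """Upper bound eps on P_base(violation), valid with confidence 1 - beta,
    assuming N i.i.d. base-policy rollouts with zero observed violations:
    (1 - eps)^N <= beta  =>  eps = 1 - beta**(1/N)."""
    return 1.0 - beta ** (1.0 / n_rollouts)

def task_violation_bound(eps_base: float, rho_max: float) -> float:
    """Change of measure: for any event E,
    P_task(E) = E_base[ratio * 1_E] <= rho_max * P_base(E)."""
    return min(1.0, rho_max * eps_base)

def project_to_ratio_bound(pi_task: np.ndarray,
                           pi_base: np.ndarray,
                           rho: float) -> np.ndarray:
    """Mix the task policy with the base policy so that
    max_a pi_proj(a|s) / pi_base(a|s) <= rho at this state.
    For pi_proj = alpha*pi_task + (1-alpha)*pi_base the ratio is
    1 + alpha*(pi_task/pi_base - 1), so alpha = (rho - 1)/(r_max - 1)."""
    r_max = np.max(pi_task / pi_base)
    if r_max <= rho:
        return pi_task  # already satisfies the ratio bound
    alpha = (rho - 1.0) / (r_max - 1.0)
    return alpha * pi_task + (1.0 - alpha) * pi_base

# Example: 1000 violation-free base rollouts, confidence 1 - 1e-3.
eps_base = scenario_violation_bound(n_rollouts=1000, beta=1e-3)  # ~0.0069
pi_base = np.array([0.25, 0.25, 0.25, 0.25])
pi_task = np.array([0.70, 0.10, 0.10, 0.10])
pi_safe = project_to_ratio_bound(pi_task, pi_base, rho=1.5)
# Over a horizon T the trajectory ratio compounds, e.g. rho_max = rho**T:
print(pi_safe, task_violation_bound(eps_base, rho_max=1.5 ** 5))
```

The mixture projection is chosen here because a convex combination of two distributions is itself a distribution and makes the worst-case ratio linear in alpha; the paper's actual projection inside PPO may differ.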
Paper Type: Previously Published Paper
Venue For Previously Published Paper: IJCAI 2025
Submission Number: 11