Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees

Published: 25 Mar 2024, Last Modified: 25 Mar 2024. Accepted by TMLR.
Abstract: Robustness and safety are critical for the trustworthy deployment of deep reinforcement learning. Real-world decision-making applications require algorithms that can guarantee robust performance and safety in the presence of general environment disturbances, while making limited assumptions about the data collection process during training. To accomplish this goal, we introduce a safe reinforcement learning framework that incorporates robustness through the use of an optimal transport cost uncertainty set. We provide an efficient implementation based on applying Optimal Transport Perturbations to construct worst-case virtual state transitions, which does not impact data collection during training and does not require detailed simulator access. In experiments on continuous control tasks with safety constraints, our approach demonstrates robust performance while significantly improving safety at deployment time compared to standard safe reinforcement learning.
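
The abstract's central mechanism is the construction of worst-case "virtual" next states within a transport-cost budget, without changing how data is collected. Below is a minimal sketch of that idea, assuming a PyTorch value critic and a simple L2 transport cost; all names (worst_case_next_state, critic, epsilon, step_size) are hypothetical and this is not the authors' exact implementation (see the linked repository for that):

    import torch

    def worst_case_next_state(critic, next_state, epsilon=0.1, n_steps=5, step_size=0.05):
        # Hypothetical sketch: search for a virtual next state within an
        # epsilon-radius L2 ball around the observed next state (a simple
        # optimal-transport cost budget) that minimizes the critic's value.
        s_adv = next_state.clone().detach().requires_grad_(True)
        for _ in range(n_steps):
            value = critic(s_adv).sum()
            grad, = torch.autograd.grad(value, s_adv)
            with torch.no_grad():
                s_adv -= step_size * grad  # gradient descent on the value estimate
                delta = s_adv - next_state
                norm = delta.norm(dim=-1, keepdim=True).clamp(min=1e-8)
                # project back onto the epsilon-ball (the transport-cost budget)
                s_adv.copy_(next_state + delta * (epsilon / norm).clamp(max=1.0))
        return s_adv.detach()

Under these assumptions, the returned virtual next state would replace the observed one when computing Bellman targets, yielding a pessimistic value update while leaving the data collection process itself untouched.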
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/jqueeney/robust-safe-rl
Assigned Action Editor: ~Aleksandra_Faust1
Submission Number: 1995