Sim-to-Lab-to-Real: Safe RL with Shielding and Generalization Guarantees

Published: 27 Apr 2022, Last Modified: 05 May 2023. ICLR 2022 GPL Oral.
Keywords: Generalization, Reinforcement Learning, Sim-to-Real Transfer, Safety Analysis
TL;DR: We propose Sim-to-Lab-to-Real, a framework combining Hamilton-Jacobi reachability analysis and PAC-Bayes Control, to improve robot safety during training and deployment, and we provide generalization guarantees on performance and safety in the real world.
Abstract: Safety is a critical requirement for autonomous systems and remains a challenge for deploying learning-based policies in the real world. In this paper, we propose Sim-to-Lab-to-Real to safely close the reality gap. To improve safety, we apply a dual policy setup where a performance policy is trained using the cumulative task reward and a backup (safety) policy is trained by solving the safety Bellman equation based on Hamilton-Jacobi reachability analysis. In Sim-to-Lab transfer, we apply a supervisory control scheme to shield unsafe actions during exploration; in Lab-to-Real transfer, we leverage the Probably Approximately Correct (PAC)-Bayes framework to provide lower bounds on the expected performance and safety of policies in unseen environments. We empirically study the proposed framework for ego-vision navigation in two types of indoor environments, including a photo-realistic one. We also demonstrate strong generalization performance through hardware experiments in real indoor spaces with a quadrupedal robot (see video of representative trials of Real deployment).
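The supervisory (shielding) scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names (`shielded_action`, `safety_critic`, `threshold`) are hypothetical, and we assume a sign convention where a positive safety-critic value flags a potentially unsafe action.

```python
def shielded_action(state, perf_policy, backup_policy, safety_critic,
                    threshold=0.0):
    """Supervisory shielding during exploration (illustrative sketch).

    Propose an action from the performance policy; if the learned safety
    critic (e.g., trained via the safety Bellman equation from HJ
    reachability analysis) flags it as unsafe, override it with the
    backup (safety) policy's action instead.
    """
    a_perf = perf_policy(state)
    # Assumed convention: critic value > threshold means the action may
    # drive the system toward the failure set, so we shield it.
    if safety_critic(state, a_perf) > threshold:
        return backup_policy(state)
    return a_perf
```

In practice, the threshold trades off conservatism against task progress: a lower threshold shields more aggressively, invoking the backup policy more often.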