Keywords: Deep Reinforcement Learning, Safe Exploration, Safe RL, Constrained Markov Decision Processes
TL;DR: We propose a safe and scalable reinforcement learning algorithm that leverages policy priors with probabilistic dynamics models to guarantee safety and convergence to optimal performance.
Abstract: Safe exploration is a key requirement for reinforcement learning agents to learn and adapt online, beyond controlled (e.g., simulated) environments. In this work, we tackle this challenge by utilizing suboptimal yet conservative policies (e.g., obtained from offline data or simulators) as priors. Our approach, SOOPER, uses probabilistic dynamics models to explore optimistically, yet pessimistically fall back to the conservative policy prior when needed. We prove that SOOPER guarantees safety throughout learning and establish convergence to an optimal policy by bounding its cumulative regret. Extensive experiments on key safe RL benchmarks and real-world hardware demonstrate that SOOPER is scalable and outperforms the state of the art, and validate our theoretical guarantees in practice.
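The abstract describes acting optimistically with a learned policy while falling back to the conservative prior whenever a pessimistic safety estimate is violated. The following is a minimal illustrative sketch of that decision rule only, not the authors' SOOPER implementation; all names (`learned_policy`, `prior_policy`, `dynamics_ensemble`, `expected_cost`, `cost_limit`) are hypothetical placeholders.

```python
def select_action(state, learned_policy, prior_policy, dynamics_ensemble, cost_limit):
    """Act optimistically with the learned policy, but fall back to the
    conservative policy prior when a pessimistic cost estimate exceeds the limit.
    Illustrative sketch only; interfaces are assumed, not taken from the paper."""
    candidate = learned_policy(state)
    # Pessimistic cost estimate: worst case over the probabilistic dynamics ensemble.
    pessimistic_cost = max(
        model.expected_cost(state, candidate) for model in dynamics_ensemble
    )
    if pessimistic_cost <= cost_limit:
        return candidate        # optimistic exploration deemed safe
    return prior_policy(state)  # fall back to the conservative prior
```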
Primary Area: reinforcement learning
Submission Number: 13546