A Practical Approach for Safe Exploration

Published: 01 Jun 2024, Last Modified: 07 Aug 2024, Deployable RL @ RLC 2024, License: CC BY 4.0
Keywords: Safe Exploration, Reinforcement Learning, Constrained Markov Decision Processes
TL;DR: We propose a method that can solve constrained Markov decision processes while ensuring safety during learning.
Abstract: A major challenge in deploying reinforcement learning in online tasks is ensuring that safety is maintained _throughout_ the learning process. In this work, we propose CERL, a new method for solving constrained Markov decision processes while keeping the policy safe during learning. Our method leverages Bayesian world models and suggests policies that are pessimistic with respect to the model's epistemic uncertainty. This makes CERL robust to model inaccuracies and leads to safe exploration during learning. In our experiments, we demonstrate that CERL outperforms the current state of the art in terms of safety and optimality when solving CMDPs from image observations.
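The core mechanism the abstract describes, acting pessimistically with respect to a Bayesian world model's epistemic uncertainty, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in rather than the paper's actual CERL implementation: the ensemble surrogate `rollout_returns`, the `pessimistic_evaluation` scoring, the cost budget, and the random candidate search are assumptions made purely to show how worst-case evaluation over an ensemble can gate which policies are ever executed.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_returns(member_seed, policy_params):
    """Toy surrogate for rolling out `policy_params` in one ensemble member.

    Stands in for imagined rollouts under one sample from a Bayesian
    world-model posterior; the functional form here is purely illustrative.
    """
    g = np.random.default_rng(member_seed)
    base = float(policy_params @ policy_params)
    reward = -base + g.normal(scale=0.1)              # member's return estimate
    cost = abs(g.normal(loc=0.3 * base, scale=0.05))  # member's cost estimate
    return reward, cost

def pessimistic_evaluation(policy_params, n_members=5, cost_budget=0.2):
    """Score a policy pessimistically w.r.t. epistemic uncertainty:
    lower-bound the reward and upper-bound the cost over the ensemble."""
    rewards, costs = zip(
        *(rollout_returns(k, policy_params) for k in range(n_members))
    )
    pess_reward = min(rewards)  # worst-case return over ensemble members
    pess_cost = max(costs)      # worst-case cost over ensemble members
    return pess_reward, pess_cost, pess_cost <= cost_budget

# Among random candidate policies, keep the best one whose *pessimistic*
# cost estimate still satisfies the budget: a candidate that is unsafe
# under any ensemble member is never deployed.
best = None
for _ in range(64):
    candidate = rng.normal(size=3) * 0.3
    reward, cost, feasible = pessimistic_evaluation(candidate)
    if feasible and (best is None or reward > best[0]):
        best = (reward, cost, candidate)

if best is not None:
    print(f"pessimistic reward={best[0]:.3f}, pessimistic cost={best[1]:.3f}")
```

The design choice this sketch highlights is that ensemble disagreement serves as a proxy for epistemic uncertainty: taking the minimum reward and maximum cost across members yields conservative bounds, so constraint satisfaction under the worst model implies safety under every model the agent considers plausible.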
Submission Number: 6