Keywords: Safe Exploration, Reinforcement Learning, Constrained Markov Decision Processes
TL;DR: We propose a method that can solve constrained Markov decision processes while ensuring safety during learning
Abstract: A major challenge in deploying reinforcement learning in online tasks is ensuring that safety is maintained _throughout_ the learning process. In this work, we propose CERL, a new method for solving constrained Markov decision processes while keeping the policy safe during learning. Our method leverages Bayesian world models and proposes policies that are pessimistic with respect to the model's epistemic uncertainty. This makes CERL robust to model inaccuracies and leads to safe exploration during learning. In our experiments, we demonstrate that CERL outperforms the current state of the art in both safety and optimality when solving CMDPs from image observations.
Submission Number: 6
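The abstract's central idea, acting pessimistically with respect to the epistemic uncertainty of a learned world model while respecting a CMDP constraint, can be illustrated with a short sketch. This is not the authors' code; the helper names (`rollout_return_and_cost`, `cost_budget`) and the ensemble-based treatment of the Bayesian posterior are illustrative assumptions.

```python
# Minimal sketch: evaluate candidate policies under an ensemble of sampled world
# models and act pessimistically w.r.t. their disagreement (epistemic uncertainty),
# accepting only policies whose worst-case constraint cost stays within the budget.
import numpy as np

def pessimistic_evaluate(policy, sampled_models, horizon, rollout_return_and_cost):
    """Worst-case return and worst-case constraint cost across model samples."""
    returns, costs = [], []
    for model in sampled_models:
        ret, cost = rollout_return_and_cost(policy, model, horizon)
        returns.append(ret)
        costs.append(cost)
    # Pessimism: assume the least favourable sampled model for both
    # the objective (lowest return) and the constraint (highest cost).
    return min(returns), max(costs)

def select_policy(candidates, sampled_models, horizon, cost_budget,
                  rollout_return_and_cost):
    """Pick the candidate with the best pessimistic return, subject to the
    pessimistic constraint cost remaining within the CMDP budget."""
    best_policy, best_return = None, -np.inf
    for policy in candidates:
        ret, cost = pessimistic_evaluate(
            policy, sampled_models, horizon, rollout_return_and_cost)
        if cost <= cost_budget and ret > best_return:
            best_policy, best_return = policy, ret
    return best_policy  # None if no candidate is pessimistically safe
```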