Constrained Policy Optimization via Bayesian World Models

Published: 28 Jan 2022, Last Modified: 22 Oct 2023
ICLR 2022 Spotlight
Keywords: Reinforcement learning, Constrained Markov decision processes, Constrained policy optimization, Bayesian model-based RL
Abstract: Improving sample efficiency and ensuring safety are crucial challenges when deploying reinforcement learning in high-stakes, real-world applications. We propose LAMBDA, a novel model-based approach for policy optimization in safety-critical tasks modeled via constrained Markov decision processes. Our approach utilizes Bayesian world models and harnesses the resulting uncertainty to maximize optimistic upper bounds on the task objective, as well as pessimistic upper bounds on the safety constraints. We demonstrate LAMBDA's state-of-the-art performance on the Safety-Gym benchmark suite in terms of sample efficiency and constraint violation.
One-sentence Summary: Solving constrained Markov decision processes with Bayesian model-based reinforcement learning.
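The sketch below illustrates the bound construction described in the abstract, not the authors' implementation. It assumes a toy setting in which the Bayesian posterior over world models is approximated by an ensemble, so each posterior sample yields one Monte-Carlo return estimate for the task reward and one for the safety cost; names such as `bound_estimates`, `cost_budget`, and `lambda_` are illustrative only.

```python
# Minimal sketch: optimistic bounds for the objective, pessimistic bounds
# for the constraint, under an ensemble approximation of the Bayesian
# posterior over world models. Hypothetical names and shapes throughout.
import numpy as np

def bound_estimates(reward_returns, cost_returns, quantile=0.95):
    """Turn per-posterior-sample return estimates into an optimistic
    upper bound on the reward and a pessimistic upper bound on the cost.

    reward_returns, cost_returns: arrays of shape (n_posterior_samples,),
    one Monte-Carlo return estimate per sampled world model.
    """
    # Optimism for the objective: a high quantile of reward returns
    # across posterior samples acts as an upper confidence bound.
    optimistic_reward = np.quantile(reward_returns, quantile)
    # Pessimism for the constraint: a high quantile of cost returns
    # enforces the constraint even under unfavorable models.
    pessimistic_cost = np.quantile(cost_returns, quantile)
    return optimistic_reward, pessimistic_cost

def lagrangian_objective(reward_returns, cost_returns, cost_budget, lambda_):
    """Scalar objective for the constrained problem: maximize the optimistic
    reward bound while penalizing violation of the pessimistic cost bound."""
    r_opt, c_pess = bound_estimates(reward_returns, cost_returns)
    return r_opt - lambda_ * max(0.0, c_pess - cost_budget)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Pretend the policy was rolled out in 10 sampled world models.
    rewards = rng.normal(loc=5.0, scale=1.0, size=10)
    costs = rng.normal(loc=0.8, scale=0.3, size=10)
    print(lagrangian_objective(rewards, costs, cost_budget=1.0, lambda_=2.0))
```

Taking a high quantile over posterior samples is one simple way to realize "optimistic for reward, pessimistic for cost"; the paper itself should be consulted for the exact bound construction and how the constrained objective is optimized.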
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2201.09802/code)