Safe Posterior Sampling for Constrained MDPs with Bounded Constraint Violation

Published: 26 Oct 2023, Last Modified: 13 Dec 2023 · NeurIPS 2023 Workshop Poster
Keywords: Optimization, Online Learning, Reinforcement learning, Posterior Sampling, Safe learning
TL;DR: A safe posterior sampling-based reinforcement learning algorithm for CMDPs with a bounded constraint violation guarantee
Abstract: The model in constrained Markov decision processes (CMDPs) is often unknown and must be learned online while still ensuring that the constraint is met, or at least that the violation is bounded over time. Some recent papers have made progress on this very challenging problem, but they either rely on unsatisfactory assumptions, such as knowledge of a safe policy, or incur high cumulative regret. We propose the Safe PSRL algorithm, which does not need such assumptions and yet performs very well, both in terms of theoretical regret bounds and empirically. The algorithm achieves an efficient tradeoff between exploration and exploitation by using the posterior sampling principle, and provably suffers only bounded constraint violation by leveraging the idea of pessimism. Our algorithm is based on a primal-dual approach. We establish a sub-linear $\tilde{\mathcal{O}}\left(H^{2.5} \sqrt{|\mathcal{S}|^2 |\mathcal{A}| K} \right)$ upper bound on the Bayesian reward objective regret, along with a bounded, i.e., $\tilde{\mathcal{O}}(1)$, constraint violation regret over $K$ episodes for a CMDP with $|\mathcal{S}|$ states, $|\mathcal{A}|$ actions, and horizon $H$.
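
The abstract describes a primal-dual, posterior-sampling loop: in each episode, sample a CMDP model from the posterior, plan against a Lagrangian objective, act, then update the posterior and the dual variable. Below is a minimal, self-contained Python sketch of that generic pattern for a tabular finite-horizon CMDP; it is an illustration under simplifying assumptions (known rewards and costs, no pessimism term, arbitrary problem sizes and step sizes), not the paper's exact Safe PSRL algorithm.

```python
# Sketch of a generic primal-dual posterior-sampling loop for a tabular CMDP.
# Assumptions (not from the paper): known reward/cost functions, Dirichlet
# posterior over transitions only, and no pessimistic cost inflation.
import numpy as np

rng = np.random.default_rng(0)
S, A, H, K = 4, 2, 5, 200            # states, actions, horizon, episodes (arbitrary)
R = rng.uniform(size=(S, A))         # reward function (assumed known here)
C = rng.uniform(size=(S, A))         # cost function (assumed known here)
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # unknown true transition kernel
budget = 0.5 * H                     # per-episode cost budget (illustrative)
alpha = np.ones((S, A, S))           # Dirichlet posterior counts over transitions
lam, eta = 0.0, 0.05                 # dual variable and its step size

def plan(P, lam):
    """Backward induction on the Lagrangian reward R - lam * C."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = (R - lam * C) + P @ V    # (S, A) action values under sampled model
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi

for k in range(K):
    # Posterior (Thompson) sample of the transition kernel.
    P_sample = np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)] for s in range(S)])
    pi = plan(P_sample, lam)
    # Roll out one episode in the true environment and update the posterior.
    s, ep_cost = 0, 0.0
    for h in range(H):
        a = pi[h, s]
        s_next = rng.choice(S, p=P_true[s, a])
        ep_cost += C[s, a]
        alpha[s, a, s_next] += 1
        s = s_next
    # Dual ascent: raise lam when the episode's cumulative cost exceeds the budget.
    lam = max(0.0, lam + eta * (ep_cost - budget))
```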
Submission Number: 55