A safe exploration approach to constrained Markov decision processes

Published: 17 Jun 2024, Last Modified: 22 Jul 2024 · FoRLaC Poster · CC BY 4.0
Abstract: We consider discounted infinite horizon constrained Markov decision processes (CMDPs), where the goal is to find an optimal policy that maximizes the expected cumulative reward subject to constraints on expected cumulative costs. Motivated by the application of CMDPs in online learning of safety-critical systems, we focus on developing a model-free and $\textit{simulator-free}$ algorithm that ensures $\textit{constraint satisfaction during learning}$. To this end, we develop an interior point approach based on the log barrier function of the CMDP. Under the commonly assumed conditions of Fisher non-degeneracy and bounded transfer error of the policy parameterization, we establish the theoretical properties of the algorithm. In particular, in contrast to existing CMDP approaches that ensure policy feasibility only upon convergence, our algorithm guarantees the feasibility of the policies during the learning process and converges to an $\varepsilon$-optimal policy with a sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-6})$. In comparison to the state-of-the-art policy gradient-based algorithm, C-NPG-PDA, our algorithm requires an additional $\mathcal{O}(\varepsilon^{-2})$ samples to ensure policy feasibility during learning with the same Fisher non-degenerate parameterization.
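As a rough illustration of the log-barrier construction (not necessarily the paper's exact formulation), assuming a single constraint that requires the expected cumulative utility $V_g(\pi_\theta)$ to stay above a threshold $b$, the barrier-regularized surrogate objective takes the form
$$
\max_{\theta} \; L_{\eta}(\theta) \;:=\; V_r(\pi_\theta) \;+\; \eta \,\log\bigl(V_g(\pi_\theta) - b\bigr), \qquad \eta > 0,
$$
where $V_r(\pi_\theta)$ is the expected cumulative reward and $\eta$ is the barrier parameter (all symbols here are illustrative). Because the logarithm diverges to $-\infty$ as $V_g(\pi_\theta) \to b^{+}$, any iterate with a finite surrogate value is strictly feasible, which is the mechanism that keeps policies within the constraint set during learning; shrinking $\eta$ trades this conservatism against optimality.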
Format: Long format (up to 8 pages + refs, appendix)
Publication Status: No
Submission Number: 14