Keywords: Reinforcement Learning, Actor-Critic Methods, Constrained Markov Decision Processes, Policy Gradient, Global Convergence
TL;DR: Global Convergence with Order-Optimal rate for Average Reward Constrained MDPs with Primal-Dual Natural Actor Critic Algorithm
Abstract: This paper investigates infinite-horizon average reward Constrained Markov Decision Processes (CMDPs) under general parametrized policies with smooth and bounded policy gradients. We propose a Primal-Dual Natural Actor-Critic algorithm that manages constraints while ensuring a fast convergence rate. In particular, our algorithm achieves global convergence and constraint violation rates of $\tilde{\mathcal{O}}(1/\sqrt{T})$ over a horizon of length $T$ when the mixing time, $\tau_{\mathrm{mix}}$, is known to the learner. In the absence of knowledge of $\tau_{\mathrm{mix}}$, the achievable rates change to $\tilde{\mathcal{O}}(1/T^{0.5-\epsilon})$ provided that $T \geq \tilde{\mathcal{O}}\left(\tau_{\mathrm{mix}}^{2/\epsilon}\right)$. Our results match the theoretical lower bound for Markov Decision Processes and establish a new benchmark in the theoretical exploration of average reward CMDPs.
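To make the primal-dual structure concrete, below is a minimal illustrative sketch of a primal-dual natural actor-critic loop on a toy average-reward CMDP. Everything here (the 2-state/2-action transition model `P`, rewards `r`, costs `c`, budget `b`, and step sizes) is a hypothetical example, not from the paper; the critic is replaced by simple running averages, and the natural gradient step uses the standard tabular-softmax simplification (adding the advantage estimate to the chosen action's logit).

```python
import numpy as np

# Hypothetical 2-state, 2-action average-reward CMDP (illustrative only).
# P[s, a] = next-state distribution; r = reward; c = cost; b = cost budget.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])
c = np.array([[0.0, 1.0], [1.0, 0.0]])
b = 0.5  # constraint: long-run average cost <= b

rng = np.random.default_rng(0)
theta = np.zeros((2, 2))   # softmax policy logits (primal variable)
lam = 0.0                  # Lagrange multiplier (dual variable)
alpha, eta = 0.05, 0.01    # primal / dual step sizes
s = 0
avg_r = avg_c = 0.0

def policy(theta, s):
    """Softmax policy over actions in state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

for t in range(1, 20001):
    a = rng.choice(2, p=policy(theta, s))
    s_next = rng.choice(2, p=P[s, a])
    # Running estimates of average reward and cost (a crude critic surrogate).
    avg_r += (r[s, a] - avg_r) / t
    avg_c += (c[s, a] - avg_c) / t
    # Advantage-like signal for the Lagrangian reward r - lam * c.
    adv = (r[s, a] - lam * c[s, a]) - (avg_r - lam * avg_c)
    # Natural policy gradient step: for tabular softmax policies this
    # reduces to adding the advantage estimate to the chosen logit.
    theta[s, a] += alpha * adv
    # Dual ascent on the constraint violation, projected onto lam >= 0.
    lam = max(0.0, lam + eta * (avg_c - b))
    s = s_next
```

The key coupling is that the dual variable `lam` rises while the running average cost exceeds the budget, which tilts the primal update toward lower-cost actions; the paper's analysis concerns the rates at which this interplay converges under general parametrization.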
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 16704