Keywords: Reinforcement Learning, Gradient Flow, Markov Decision Process, Entropy Regularization, Non-convex Optimization, Mirror Descent Method, Fisher–Rao Gradient Flow, Global Convergence, Function Approximation, Actor-Critic
Abstract: We prove the stability and global convergence of a coupled actor-critic gradient flow for infinite-horizon, entropy-regularised Markov decision processes (MDPs) with continuous state and action spaces and linear function approximation under Q-function realisability.
We consider a version of actor-critic in which the critic is updated via temporal difference (TD) learning while the policy is updated via policy mirror descent on a separate timescale. We establish stability and exponential convergence of the actor-critic flow to the optimal policy. Finally, we analyse the interplay between the timescale separation and the entropy regularisation and its effect on stability and convergence.
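As a rough illustration only, and not the authors' actual algorithm or analysis, the sketch below shows an Euler discretisation of a two-timescale actor-critic of this flavour on a hypothetical toy tabular MDP (a tabular critic is a linear critic with one-hot features): the critic takes fast TD-style steps towards a soft Bellman target, while the actor takes slow entropy-regularised policy mirror descent steps in KL geometry. The MDP (`P`, `R`), temperature `tau`, and stepsizes `alpha`, `beta` are illustrative placeholders.

```python
import numpy as np

# --- Hypothetical toy entropy-regularised MDP (not from the paper) ---
# 2 states, 2 actions; P[s, a, s'] is the transition kernel, R[s, a] the reward.
nS, nA = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.8, 0.2]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma, tau = 0.9, 0.1          # discount factor and entropy temperature (illustrative)

# --- Two-timescale actor-critic: Euler discretisation of a coupled flow ---
alpha = 0.01                   # slow actor stepsize
beta = 0.1                     # fast critic stepsize (beta >> alpha: timescale separation)
Q = np.zeros((nS, nA))         # tabular critic = linear critic with one-hot features
pi = np.ones((nS, nA)) / nA    # uniform initial policy

for _ in range(20_000):
    # Critic: TD-style step towards the soft Bellman target
    #   T_tau Q(s, a) = R(s, a) + gamma * E_{s'}[ sum_a' pi(a'|s') (Q(s', a') - tau * log pi(a'|s')) ]
    soft_v = (pi * (Q - tau * np.log(pi))).sum(axis=1)   # soft state value V_tau(s')
    target = R + gamma * P @ soft_v                      # shape (nS, nA)
    Q += beta * (target - Q)

    # Actor: entropy-regularised policy mirror descent (KL geometry),
    #   pi_new(a|s) proportional to pi(a|s)^(1 - alpha*tau) * exp(alpha * Q(s, a))
    logits = (1.0 - alpha * tau) * np.log(pi) + alpha * Q
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)

print("learned policy:\n", np.round(pi, 3))
print("soft Q-values:\n", np.round(Q, 3))
```

For small `alpha` this actor update approximates a Fisher–Rao-type flow on the policy, d/dt log pi(a|s) = Q(s, a) - tau * log pi(a|s) + const, which is one common way such entropy-regularised mirror descent dynamics are written; the exact flow studied in the paper may differ.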
Primary Area: reinforcement learning
Submission Number: 18267