Convergence of an actor-critic gradient flow for entropy regularised MDPs in general spaces

Published: 26 Jan 2026, Last Modified: 26 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Reinforcement Learning, Gradient Flow, Markov Decision Process, Entropy Regularization, Non-convex optimization, Mirror descent method, Fisher–Rao gradient flow, Global convergence, Function approximation, Actor Critic
Abstract: We prove the stability and global convergence of a coupled actor-critic gradient flow for infinite-horizon, entropy-regularised Markov decision processes (MDPs) with continuous state and action spaces, using linear function approximation under Q-function realisability. We consider a version of the actor-critic gradient flow in which the critic is updated via temporal difference (TD) learning while the policy is updated via a policy mirror descent method on a separate timescale. For general action spaces, the relative entropy regulariser is unbounded, so it is not clear a priori that the actor-critic flow avoids finite-time blow-up. We therefore first establish stability, which in turn enables us to obtain a convergence rate of the actor-critic flow to the optimal regularised value function. The arguments presented show that timescale separation is crucial for stability and convergence in this setting.
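To illustrate the kind of coupled dynamics the abstract refers to, the following is a rough schematic (an assumption for illustration only, not the paper's exact system) of a two-timescale entropy-regularised actor-critic flow: a fast TD flow for a linear critic Q_theta(s,a) = theta^T phi(s,a) and a Fisher–Rao / mirror-descent flow for the policy pi_t. Here tau > 0 denotes the entropy-regularisation strength, epsilon > 0 the timescale-separation parameter, gamma the discount factor, and rho_{pi_t} a state-action distribution induced by pi_t; all of these symbols are illustrative and not taken from the paper.

\begin{align*}
  % Critic (fast timescale): TD flow on the linear parameters theta_t
  \frac{\mathrm{d}\theta_t}{\mathrm{d}t}
    &= \frac{1}{\varepsilon}\,
       \mathbb{E}_{(s,a)\sim \rho_{\pi_t}}\!\Big[
         \Big(r(s,a)
            + \gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a),\,a'\sim \pi_t(\cdot\mid s')}
              \big[Q_{\theta_t}(s',a') - \tau \log \pi_t(a'\mid s')\big]
            - Q_{\theta_t}(s,a)\Big)\,\phi(s,a)\Big], \\[4pt]
  % Actor (slow timescale): Fisher--Rao / mirror-descent flow on the policy
  \frac{\partial}{\partial t}\log \pi_t(a\mid s)
    &= \big(Q_{\theta_t}(s,a) - \tau \log \pi_t(a\mid s)\big)
       - \int \pi_t(a'\mid s)\,
         \big(Q_{\theta_t}(s,a') - \tau \log \pi_t(a'\mid s)\big)\,\mathrm{d}a'.
\end{align*}

The second integral term is the normalisation keeping pi_t(.|s) a probability density; taking epsilon small corresponds to the critic equilibrating faster than the policy, which is the timescale separation the abstract identifies as crucial.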
Primary Area: reinforcement learning
Submission Number: 18267