Offline Actor-Critic for Average Reward MDPs

Published: 18 Sept 2025, Last Modified: 29 Oct 2025, NeurIPS 2025 poster, CC BY 4.0
Keywords: Reinforcement Learning, Offline, Actor-Critic
Abstract: We study offline policy optimization for infinite-horizon average-reward Markov decision processes (MDPs) with large or infinite state spaces. Specifically, we propose a pessimistic actor-critic algorithm that uses a computationally efficient linear function class for value function estimation. At the core of our method is a critic that computes a pessimistic estimate of the average reward under the current policy, together with the corresponding policy gradient, by solving a fixed-point Bellman equation rather than a sequence of regression problems as in finite-horizon settings. This procedure reduces to solving a second-order cone program, which is computationally tractable. Our theoretical analysis relies on a weak partial data coverage assumption, which requires only that the offline data aligns well with the expected feature vector of a comparator policy. Under this condition, we show that our algorithm achieves the optimal sample complexity of $O(\varepsilon^{-2})$ for learning a near-optimal policy, up to model misspecification errors.
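To make the critic's reduction to a second-order cone program concrete, here is a minimal, hypothetical sketch in Python using cvxpy. It assumes a linear parameterization of the relative value function, phi(s,a)^T w, synthetic offline features, and an illustrative pessimism radius `beta`; the exact program, penalty, and constraints used in the paper may differ.

```python
# Hypothetical sketch of a pessimistic critic for offline average-reward RL
# with linear function approximation. Variable names and the exact program
# are illustrative assumptions, not the paper's actual formulation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, n = 4, 200                     # feature dimension, offline sample size

# Offline data: features phi(s,a), rewards r, and next-state features
# phi(s', pi(s')) under the current policy pi (assumed precomputed).
Phi      = rng.normal(size=(n, d))
Phi_next = rng.normal(size=(n, d))
r        = rng.uniform(size=n)

rho = cp.Variable()               # average-reward estimate
w   = cp.Variable(d)              # weights of the linear relative value function
xi  = cp.Variable()               # slack on the empirical Bellman residual

# Empirical residual of the average-reward fixed-point Bellman equation:
#   phi(s,a)^T w  ~=  r(s,a) - rho + phi(s',a')^T w
residual = Phi @ w - (r - rho + Phi_next @ w)

beta = 1.0                        # pessimism / uncertainty radius (assumed)
constraints = [
    cp.SOC(xi, residual / np.sqrt(n)),   # ||residual||_2 / sqrt(n) <= xi
    xi <= beta,                          # keep the Bellman error small
    cp.norm(w, 2) <= 10.0,               # bounded weights (regularity assumption)
]

# Pessimism: report the smallest average reward consistent with the data.
prob = cp.Problem(cp.Minimize(rho), constraints)
prob.solve()
print("pessimistic average-reward estimate:", rho.value)
```

Because the objective is linear and all constraints are second-order cone or linear, the problem is an SOCP and can be handed to any off-the-shelf conic solver; the actor would then update the policy using the gradient information associated with this pessimistic estimate.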
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 26773