Keywords: Actor-critic, batch RL, offline RL, natural policy gradient, mirror descent, pessimism, policy learning, linear value functions
TL;DR: Actor-critic methods can achieve minimax statistical efficiency and computational tractability in models more general than low-rank MDPs.
Abstract: Actor-critic methods are widely used in offline reinforcement learning practice, but they are not well understood theoretically. We propose a new
offline actor-critic algorithm that naturally incorporates the pessimism principle, leading to several key advantages compared to the state of the art.
The algorithm can operate when the Bellman evaluation operator is closed with respect to the action-value functions of the actor's policies; this is a more general setting than the low-rank MDP model. Despite the added generality, the procedure remains computationally tractable, as it involves solving a sequence of second-order cone programs.
We prove an upper bound on the suboptimality gap of the policy returned by the procedure that depends on the data coverage of an arbitrary, possibly data-dependent comparator policy.
This guarantee is complemented by a minimax lower bound that matches it up to logarithmic factors.
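As a rough guide to the "data coverage" quantity invoked in the abstract (the notation below is assumed for illustration and is not taken from the paper): with linear action-value features $\phi(s,a) \in \mathbb{R}^d$ and regularized empirical feature covariances $\Sigma_h$ built from the offline dataset, guarantees of this flavor typically scale with the comparator policy's expected feature norm measured in the inverse covariance,
\[
\mathcal{C}(\widetilde{\pi}) \;=\; \sum_{h} \mathbb{E}_{\widetilde{\pi}}\!\left[ \sqrt{ \phi(s_h, a_h)^{\top} \Sigma_h^{-1} \phi(s_h, a_h) } \right],
\qquad
\Sigma_h \;=\; \sum_{i=1}^{n} \phi(s_h^i, a_h^i)\, \phi(s_h^i, a_h^i)^{\top} + \lambda I .
\]
Informally, the returned policy competes with any comparator whose state-action features are well covered by the data; the precise dependence on the dimension and horizon is what the paper's matching upper and lower bounds pin down.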
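To make the pessimism and second-order cone program claims concrete, here is a minimal, hypothetical Python sketch of a single pessimistic critic step with linear action-value features, assuming an ellipsoidal confidence set centered at the least-squares critic; the names (Phi, Sigma, theta_hat, beta, phi_init) and the confidence-set construction are illustrative assumptions, not the paper's algorithm.

# Hypothetical sketch of one pessimistic critic step as a second-order cone
# program (SOCP); names and constants are illustrative, not from the paper.
import numpy as np
import cvxpy as cp

d, n = 8, 500
rng = np.random.default_rng(0)

Phi = rng.normal(size=(n, d))            # features phi(s_i, a_i) from the offline dataset
targets = rng.normal(size=n)             # regression targets r_i + V(s_i') for the current actor

Sigma = Phi.T @ Phi + np.eye(d)          # regularized empirical feature covariance
theta_hat = np.linalg.solve(Sigma, Phi.T @ targets)   # least-squares critic
beta = 1.0                               # confidence-set radius (set by theory in practice)
phi_init = rng.normal(size=d)            # feature of the initial state-action under the actor

# Pessimism: pick the critic parameter inside the data-driven confidence
# ellipsoid that minimizes the actor's estimated initial value.
theta = cp.Variable(d)
Sigma_half = np.linalg.cholesky(Sigma).T               # Sigma = Sigma_half.T @ Sigma_half
constraints = [cp.norm(Sigma_half @ (theta - theta_hat), 2) <= beta]
problem = cp.Problem(cp.Minimize(phi_init @ theta), constraints)
problem.solve()                                        # handled as an SOCP by conic solvers

print("pessimistic value estimate:", float(phi_init @ theta.value))

Minimizing a linear objective over an ellipsoid is a textbook second-order cone program, which is why each such critic update can be handed to an off-the-shelf conic solver.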
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf