Keywords: RL theory
Abstract: Offline reinforcement learning (RL) aims to learn an optimal policy from a pre-collected dataset without further interaction with the environment. While many algorithms have been proposed for offline RL, minimax optimality has only been (nearly) established for tabular Markov decision processes (MDPs). In this paper, we focus on offline RL with linear function approximation and propose two new algorithms, LinPEVI-ADV+ and LinPMVI-ADV+, for single-agent MDPs and two-player zero-sum Markov games (MGs), respectively. The proposed algorithms achieve pessimism with variance reduction via reference-advantage decomposition and variance-reweighted ridge regression. Theoretical analysis demonstrates that they match the performance lower bounds up to logarithmic factors. We also establish new performance lower bounds for MDPs and MGs that tighten existing results, demonstrating the near-minimax optimality of the proposed algorithms. As a byproduct, the techniques developed in this paper further improve the suboptimality bound when the feature vector set is finite. To the best of our knowledge, these are the first computationally efficient and nearly minimax optimal algorithms for offline single-agent MDPs and MGs with linear function approximation.
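The abstract mentions variance-reweighted ridge regression as a core ingredient. A minimal sketch of that building block, assuming per-sample variance estimates `sigma2` and a lower clipping threshold `sigma2_min` (both hypothetical names; the paper's exact estimator and constants are not reproduced here):

```python
import numpy as np

def variance_weighted_ridge(Phi, y, sigma2, lam=1.0, sigma2_min=1.0):
    """Ridge regression with per-sample variance reweighting.

    Each sample is down-weighted by (an estimate of) its noise
    variance, clipped below at sigma2_min for numerical stability.
    Returns the estimated parameter and the reweighted Gram matrix,
    which typically also drives the pessimistic bonus.
    """
    w = 1.0 / np.maximum(sigma2, sigma2_min)  # per-sample weights
    Lambda = Phi.T @ (w[:, None] * Phi) + lam * np.eye(Phi.shape[1])
    theta = np.linalg.solve(Lambda, Phi.T @ (w * y))
    return theta, Lambda
```

Samples with higher estimated variance contribute less to the regression, which is what tightens the resulting confidence widths relative to unweighted ridge regression.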
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)