Offline Reinforcement Learning with Additional Covering Distributions

Published: 09 Nov 2023, Last Modified: 17 Sept 2024, Accepted by TMLR
Abstract: We study learning optimal policies from a logged dataset, i.e., offline RL, with general function approximation. Despite considerable effort, existing algorithms with finite-sample theoretical guarantees typically assume exploratory data coverage or strong realizability of the function classes (e.g., Bellman-completeness), both of which are hard to satisfy in practice. Recent works that relax these strong assumptions either require gap conditions that only some MDPs satisfy, or rely on behavior regularization, which makes the optimality of the learned policy intractable. To address this challenge, we provide finite-sample guarantees for a simple algorithm based on marginalized importance sampling (MIS), showing that sample-efficient offline RL for general MDPs is possible with only a partial-coverage dataset (instead of a dataset covering all possible policies) and weak realizability of the function classes (each class need only contain a single well-specified function), given additional side information in the form of a covering distribution. We demonstrate that the covering distribution trades off prior knowledge of the optimal trajectories against the coverage requirement on the dataset, revealing the effect of this inductive bias on the learning process. Furthermore, when the dataset is exploratory, our analysis shows that realizable function classes alone suffice for learning near-optimal policies, even without side information about additional covering distributions.
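For context, marginalized importance sampling reweights the logged data distribution $\mu$ by a marginalized density ratio to recover the value of a target policy $\pi$. The identity below is the standard MIS formulation and is shown only as a sketch of the general idea; it is not the paper's specific estimator or its guarantees.

```latex
J(\pi)
\;=\; \frac{1}{1-\gamma}\,\mathbb{E}_{(s,a)\sim d^{\pi}}\!\left[r(s,a)\right]
\;=\; \frac{1}{1-\gamma}\,\mathbb{E}_{(s,a)\sim \mu}\!\left[w^{\pi}(s,a)\,r(s,a)\right],
\qquad
w^{\pi}(s,a) \;=\; \frac{d^{\pi}(s,a)}{\mu(s,a)},
```

where $d^{\pi}$ is the discounted state-action occupancy of $\pi$ and the weight function $w^{\pi}$ is what MIS-based methods approximate with a (weakly) realizable function class.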
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Chicheng_Zhang1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1320