Keywords: Reinforcement Learning, Offline Reinforcement Learning, Linear MDPs
Abstract: Offline Reinforcement Learning (RL) aims to learn a near-optimal
policy from a fixed dataset of transitions collected by another policy.
This problem has attracted a lot of attention recently, but most existing
methods with strong theoretical guarantees are restricted to finite-horizon
or tabular settings. In contrast, few algorithms for
infinite-horizon settings with function approximation and minimal assumptions
on the dataset are both sample-efficient and computationally efficient.
Another gap in the current literature is the lack of theoretical analysis for
the average-reward setting, which is more challenging than the discounted setting.
In this paper, we address both of these issues by proposing a primal-dual
optimization method based on the linear programming formulation of RL.
Our key contribution is a new reparametrization that allows us to derive low-variance gradient estimators, which can be employed in a stochastic optimization scheme that requires only samples from the behavior policy.
Our method finds an $\varepsilon$-optimal policy with
$O(\varepsilon^{-4})$ samples, improving on the previously known $O(\varepsilon^{-5})$ rate,
while being computationally efficient for
infinite-horizon discounted and average-reward MDPs with realizable linear
function approximation and partial coverage. Moreover, to the best of our
knowledge, this is the first theoretical result for average-reward offline RL.
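As background for the primal-dual approach summarized above, the classical linear programming formulation of a discounted MDP can be sketched as follows; the notation ($\mu$, $V$, $E$, $P$, $\nu_0$, $\gamma$) follows common conventions and is not quoted from the paper itself:
$$
\max_{\mu \ge 0} \; \langle \mu, r \rangle \quad \text{subject to} \quad E^\top \mu = (1-\gamma)\,\nu_0 + \gamma P^\top \mu,
$$
where $\mu$ is the (normalized) state-action occupancy measure, $r$ the reward vector, $P$ the transition kernel, $\nu_0$ the initial-state distribution, and $(E^\top \mu)(x) = \sum_a \mu(x,a)$. The associated Lagrangian,
$$
\mathcal{L}(\mu, V) = \langle \mu, r \rangle + \big\langle V,\; (1-\gamma)\,\nu_0 + \gamma P^\top \mu - E^\top \mu \big\rangle,
$$
has its saddle points at optimal occupancy measures and value functions, and primal-dual methods of the kind described in the abstract seek such a saddle point via stochastic gradient updates computed from dataset samples.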
Supplementary Material: pdf
Submission Number: 12613