Keywords: offline RL, weight function, confidence intervals, policy optimization
TL;DR: Provably statistically efficient offline RL with arbitrary test functions
Abstract: We propose and analyze a reinforcement learning principle that
approximates the Bellman equations by enforcing their validity only
along a user-defined space of test functions. Focusing on
applications to model-free offline RL with function approximation, we
exploit this principle to derive confidence intervals for off-policy
evaluation, as well as to optimize over policies within a prescribed
policy class. We prove an oracle inequality for our policy
optimization procedure in terms of a trade-off between the value and
uncertainty of an arbitrary comparator policy. Different choices of
test function spaces allow us to tackle different problems within a
common framework. We characterize the loss of statistical efficiency
our procedures incur when moving from on-policy to off-policy data, and
establish connections to concentrability coefficients studied in past work. We
examine in depth the implementation of our methods with linear
function approximation, and provide theoretical guarantees with
polynomial-time implementations even when Bellman closure does not
hold.
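A minimal sketch of the weak Bellman formulation the abstract describes, in assumed notation (the symbols f, g, G, mu, d_0, and the discounted setting are illustrative, not taken from the paper; f(s', pi(s')) is shorthand for the expectation of f(s', a') with a' drawn from pi at s'): rather than requiring a candidate value function f to satisfy the Bellman equation exactly, one requires its Bellman residual to vanish in expectation against every test function g in a user-chosen class G under the data distribution mu:

\[
\mathbb{E}_{(s,a,r,s') \sim \mu}\Big[\, g(s,a)\,\big( f(s,a) - r - \gamma\, f\big(s',\pi(s')\big) \big) \Big] \;=\; 0
\qquad \text{for all } g \in \mathcal{G}.
\]

Under this reading, confidence intervals for off-policy evaluation would follow by taking the smallest and largest values of \(\mathbb{E}_{s \sim d_0}\big[f\big(s,\pi(s)\big)\big]\) over all f consistent with an empirical, slack-relaxed version of these constraints; richer test classes G tighten the intervals at the cost of stronger data-coverage requirements.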