Structured Difference-of-Q via Orthogonal Learning

Published: 03 Feb 2026, Last Modified: 03 Feb 2026, AISTATS 2026 Poster, CC BY 4.0
Abstract: Offline reinforcement learning is important in settings where observational data are available but new policies cannot be deployed online due to safety, cost, or other concerns. Many recent advances in causal inference and machine learning target estimation of ``causal contrast'' functions such as the CATE, which can adapt to potentially smoother structure. We develop a dynamic generalization of the R-learner (Nie et al., 2021; Lewis and Syrgkanis, 2021) for estimating and optimizing the difference of $Q^\pi$-functions, $Q^\pi(s,a)-Q^\pi(s,a_0)$, for discrete-valued actions $a,a_0$, which can be used to optimize over multiple-valued actions without loss of generality. We leverage orthogonal estimation to improve convergence rates even when the $Q$-function and behavior policy (the so-called nuisance functions) converge at slower rates, and we prove consistency of policy optimization under a margin condition. The method can leverage black-box estimators of the $Q$-function and behavior policy to target estimation of a more structured $Q$-function contrast, and it consists of simple squared-loss minimization.
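To illustrate the squared-loss principle the abstract describes, the following is a minimal sketch of the static (single-step) R-learner it generalizes, not the paper's dynamic method: residualize the outcome on a baseline estimate and the action on a behavior-policy estimate, then fit a structured contrast model by least squares. All simulated data, the linear basis for the contrast, and the perturbed "nuisance estimates" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x = rng.uniform(-1, 1, n)

# Toy ground truth (assumed for illustration; in practice the nuisances
# below would be fit by black-box learners, as the abstract describes)
e = 1.0 / (1.0 + np.exp(-x))        # behavior policy P(A=1 | x)
tau = 1.0 + x                        # true contrast to recover
m = e * tau + np.sin(x)              # outcome baseline E[Y | x]
A = rng.binomial(1, e)
Y = m + (A - e) * tau + rng.normal(0, 0.5, n)

# Perturbed nuisance estimates, mimicking slow (but consistent)
# nuisance convergence; orthogonality keeps their error second-order
m_hat = m + rng.normal(0, 0.1, n)
e_hat = np.clip(e + rng.normal(0, 0.05, n), 0.01, 0.99)

# Orthogonal squared loss: regress the outcome residual on the
# action residual, with tau(x) = b0 + b1*x as the structured model
R_y = Y - m_hat
R_a = A - e_hat
Phi = np.column_stack([np.ones(n), x])   # basis for tau(x)
Z = Phi * R_a[:, None]                   # residual-weighted design
beta, *_ = np.linalg.lstsq(Z, R_y, rcond=None)
print(beta)  # should be close to the true coefficients [1.0, 1.0]
```

Despite the deliberately noisy nuisance estimates, the fitted contrast coefficients land near the truth, which is the practical payoff of the orthogonal loss.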
Submission Number: 631