Structured Difference-of-Q via Orthogonal Learning

Published: 28 Nov 2025, Last Modified: 30 Nov 2025, NeurIPS 2025 Workshop MLxOR, CC BY 4.0
Keywords: causal reinforcement learning, offline reinforcement learning, ad
Abstract: Offline reinforcement learning is important in many settings where observational data is available but new policies cannot be deployed online due to safety, cost, and other concerns. Many recent advances in causal inference and machine learning target estimation of ``causal contrast'' functions such as the CATE, which is sufficient for optimizing decisions and can adapt to potentially smoother structure. We develop a dynamic generalization of the R-learner \citep{nie2021learning,lewis2021double} for estimating and optimizing the difference of $Q^\pi$-functions, $Q^\pi(s,a)-Q^\pi(s,a_0)$, for discrete-valued actions $a,a_0$, which can be used to optimize over multiple-valued actions without loss of generality. We leverage orthogonal estimation to improve convergence rates even when the estimated $Q$-function and behavior policy (the so-called nuisance functions) converge at slower rates, and we prove consistency of policy optimization under a margin condition. The method can leverage black-box estimators of the $Q$-function and behavior policy to target estimation of a more structured $Q$-function contrast, and it reduces to simple squared-loss minimization.
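The abstract describes the estimator only at a high level. As a hedged illustration of the residual-on-residual squared-loss idea it references, the sketch below implements a static, one-step, binary-action analogue of the R-learner loss of \citet{nie2021learning} that the paper generalizes. The function name `fit_difference_of_q`, the nuisance callables `q_hat` and `e_hat`, and the linear contrast class are hypothetical choices for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions noted above), not the paper's estimator:
# a one-step, binary-action, R-learner-style fit of tau(s) ~ Q(s,1) - Q(s,0)
# from plug-in black-box nuisance estimates.
import numpy as np
from sklearn.linear_model import Ridge

def fit_difference_of_q(S, A, Y, q_hat, e_hat, alpha=1.0):
    """S: (n, d) states; A: (n,) binary actions; Y: (n,) observed outcomes.
    q_hat: callable (S, A) -> (n,) plug-in Q estimates (assumed interface).
    e_hat: callable (S,)  -> (n,) behavior-policy probabilities of A = 1.
    """
    n = len(Y)
    e = e_hat(S)
    # Plug-in estimate of E[Y | S]: behavior-policy average of the Q estimates.
    m_hat = e * q_hat(S, np.ones(n)) + (1.0 - e) * q_hat(S, np.zeros(n))
    y_res = Y - m_hat          # outcome residual
    a_res = A - e              # action residual
    # Linear contrast tau(s) = b0 + s @ w, fit by squared-loss minimization:
    #   minimize sum_i ( y_res_i - a_res_i * tau(S_i) )^2.
    X = np.column_stack([a_res, S * a_res[:, None]])
    model = Ridge(alpha=alpha, fit_intercept=False).fit(X, y_res)
    b0, w = model.coef_[0], model.coef_[1:]
    return lambda S_new: b0 + S_new @ w
```

A more flexible contrast class could replace the linear model, and cross-fitting the nuisance estimates `q_hat` and `e_hat` on held-out folds is what underlies the orthogonality benefit the abstract describes; the paper's dynamic, multi-action version is a generalization of this pattern.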
Submission Number: 129