Implicit Constraint‑Aware Off‑Policy Correction for Offline Reinforcement Learning

Published: 01 Jun 2025 · Last Modified: 23 Jun 2025 · OOD Workshop @ RSS 2025 · CC BY 4.0
Keywords: Offline reinforcement learning, constraint-aware learning, monotonicity constraints, implicit differentiation
TL;DR: A framework that embeds structural constraints inside Bellman updates for offline RL, ensuring value functions respect domain knowledge while outperforming existing methods.
Abstract: Offline reinforcement learning promises policy improvement from logged interaction data alone, yet state‑of‑the‑art algorithms remain vulnerable to value over‑estimation and to violations of domain knowledge such as monotonicity or smoothness. We introduce implicit constraint‑aware off‑policy correction, a framework that embeds structural priors directly inside every Bellman update. The key idea is to compose the optimal Bellman operator with a proximal projection onto a convex constraint set, which produces a new operator that (i) remains a $\gamma$‑contraction, (ii) possesses a unique fixed point, and (iii) enforces the prescribed structure exactly. A differentiable optimization layer solves the projection; implicit differentiation supplies gradients for deep function approximators at a computational cost comparable to that of implicit Q‑learning. On a synthetic Bid–Click auction—where the true value is provably monotone in the bid—our method eliminates all monotonicity violations and outperforms conservative Q‑learning and implicit Q‑learning in return, regret, and sample efficiency.
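To make the core operator concrete, the following is a minimal tabular sketch, not the paper's implementation: it composes an optimal Bellman backup with a Euclidean projection onto the set of Q-functions that are non-decreasing in the action index, using the standard pool-adjacent-violators algorithm as the projection. The toy MDP, the function names (`pava`, `constrained_bellman`), and the choice of isotonic regression as the proximal step are illustrative assumptions; the paper's differentiable optimization layer and implicit differentiation for deep function approximators are not shown here.

```python
# Hypothetical tabular sketch of a constraint-aware Bellman update:
# (projection onto a monotone set) composed with the optimal Bellman operator.
# The MDP below is a random toy example, not the paper's Bid-Click auction.
import numpy as np

def pava(y):
    """Project y onto the set of non-decreasing vectors (isotonic regression)."""
    merged = []  # list of [block mean, block size]
    for v in np.asarray(y, dtype=float):
        merged.append([v, 1.0])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    out = []
    for mean, size in merged:
        out.extend([mean] * int(size))
    return np.array(out)

def constrained_bellman(Q, P, R, gamma):
    """One application of (projection onto monotone set) o (optimal Bellman operator)."""
    V = Q.max(axis=1)                              # greedy state values, shape (S,)
    TQ = R + gamma * np.einsum("sat,t->sa", P, V)  # optimal backup, shape (S, A)
    return np.vstack([pava(row) for row in TQ])    # enforce monotonicity in the action

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, gamma = 5, 4, 0.9
    P = rng.dirichlet(np.ones(S), size=(S, A))     # transition kernel, shape (S, A, S)
    R = rng.uniform(size=(S, A))                   # reward table, shape (S, A)
    Q = np.zeros((S, A))
    for _ in range(200):                           # fixed-point iteration of the composed operator
        Q = constrained_bellman(Q, P, R, gamma)
    # The iterate satisfies the monotonicity constraint exactly at every step.
    assert np.all(np.diff(Q, axis=1) >= -1e-9)
```

Because projection onto a convex set is non-expansive, the composed operator in this sketch inherits the $\gamma$-contraction property of the Bellman backup, which is the property the abstract attributes to the proposed operator.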
Submission Number: 6