Q-Learning with Adjoint Matching

ICLR 2026 Conference Submission13888 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Reinforcement learning, flow-matching
Abstract: We propose Q-learning with Adjoint Matching (QAM), a novel temporal-difference (TD) based reinforcement learning (RL) algorithm that tackles a long-standing challenge in continuous-action RL: efficiently optimizing an expressive diffusion/flow-matching policy against a parameterized value function (i.e., the critic $Q_\phi(s, a)$). Effective optimization requires exploiting the critic's first-order information (i.e., the action gradient $\nabla_a Q_\phi(s, a)$), but doing so is especially challenging for flow/diffusion policies because direct gradient-based optimization via backpropagation through their multi-step denoising process is unstable. Existing methods work around this either by using only the critic's value and discarding its gradient information, or by relying on approximations that sacrifice policy expressivity or bias the learned policy. QAM sidesteps both pitfalls by leveraging adjoint matching, a recently proposed technique in generative modeling, which transforms the critic's action gradient into a step-wise objective that is free from unstable backpropagation while yielding an unbiased, expressive policy at the optimum. Combined with TD backups for critic learning, QAM consistently outperforms prior approaches on challenging, sparse-reward tasks in both offline and offline-to-online RL settings.
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 13888
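
To make the abstract's central construction concrete (turning $\nabla_a Q_\phi(s, a)$ into a step-wise objective that avoids backpropagation through the multi-step sampler), here is a hedged sketch of what such a policy update could look like. The function names, the frozen reference drift `u_ref`, the noise schedule `sigma`, and the exact loss weighting are assumptions drawn from the published adjoint matching formulation, not details confirmed by this submission.

```python
# Minimal sketch of an adjoint-matching-style policy loss against a learned critic.
# Assumptions (not from the submission): PyTorch; a trainable flow drift u_theta(s, x, t);
# a frozen reference drift u_ref(s, x, t), e.g. a behavior-cloned flow; a noise schedule
# sigma(t) that is positive on the sampling grid; K Euler-Maruyama denoising steps.
import torch

def qam_style_policy_loss(u_theta, u_ref, sigma, q_critic, s, act_dim, K=16):
    B, dt = s.shape[0], 1.0 / K
    x = torch.randn(B, act_dim)                         # x_0 ~ N(0, I)

    # 1) Sample the denoising chain WITHOUT tracking gradients through it.
    xs, ts = [], []
    with torch.no_grad():
        for k in range(K):
            t = torch.full((B, 1), k * dt)
            xs.append(x)
            ts.append(t)
            noise = sigma(k * dt) * (dt ** 0.5) * torch.randn_like(x)
            x = x + dt * u_theta(s, x, t) + noise
    a = x                                               # final action proposal

    # 2) Seed the adjoint with the critic's action gradient (terminal cost = -Q).
    a = a.detach().requires_grad_(True)
    adj = torch.autograd.grad((-q_critic(s, a)).sum(), a)[0]

    # 3) Propagate the (lean) adjoint backward with vector-Jacobian products and
    #    accumulate a per-step regression loss; no backprop through the sampler.
    loss = 0.0
    for k in reversed(range(K)):
        x_k = xs[k].detach().requires_grad_(True)
        t_k, sig = ts[k], sigma(k * dt)
        drift = u_theta(s, x_k, t_k)
        vjp = torch.autograd.grad(drift, x_k, grad_outputs=adj, retain_graph=True)[0]
        adj = (adj + dt * vjp).detach()                 # backward Euler step of the adjoint ODE
        ctrl = (2.0 / sig) * (drift - u_ref(s, x_k, t_k).detach())
        loss = loss + ((ctrl + sig * adj) ** 2).mean()  # step-wise target built from the adjoint
    return loss / K
```

Each summand depends on the policy parameters only through the drift at that step, so the update exploits $\nabla_a Q_\phi(s, a)$ without differentiating through the K-step sampling chain; the critic itself would be trained separately with standard TD backups.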