In-Context Compositional Q-Learning for Offline Reinforcement Learning

ICLR 2026 Conference Submission 21465 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: In-context Learning, Reinforcement Learning
Abstract: Accurately estimating the Q-function is a central challenge in offline reinforcement learning. However, existing approaches often rely on a single global Q-function, which struggles to capture the compositional nature of tasks involving diverse subtasks. We propose In-context Compositional Q-Learning ($\texttt{ICQL}$), the first offline RL framework that formulates Q-learning as a contextual inference problem, using linear Transformers to adaptively infer local Q-functions from retrieved transitions without explicit subtask labels. Theoretically, we show that under two assumptions, namely linear approximability of the local Q-function and accurate weight inference from retrieved context, $\texttt{ICQL}$ achieves bounded Q-function approximation error and supports near-optimal policy extraction. Empirically, $\texttt{ICQL}$ substantially improves performance in offline settings: by up to 29.46\% on Kitchen tasks and by up to 6\% on Gym and Adroit tasks. These results highlight the underexplored potential of in-context learning for robust and compositional value estimation, positioning $\texttt{ICQL}$ as a principled and effective framework for offline RL.
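
To make the contextual-inference view concrete, the sketch below is a minimal illustration, not the authors' implementation: ridge regression over retrieved transitions stands in for the linear-Transformer weight-inference step, and the feature map, nearest-neighbour retrieval, context size `k`, and regularizer are all illustrative assumptions.

```python
import numpy as np

def retrieve_context(query_sa, dataset_sa, dataset_targets, k=32):
    """Retrieve the k transitions whose (state, action) features are closest
    to the query; these form the in-context set (nearest-neighbour retrieval
    is an assumption here, not a detail given in the abstract)."""
    dists = np.linalg.norm(dataset_sa - query_sa, axis=1)
    idx = np.argsort(dists)[:k]
    return dataset_sa[idx], dataset_targets[idx]

def local_q_estimate(query_sa, context_sa, context_targets, reg=1e-3):
    """Infer local linear weights w from the retrieved context (ridge
    regression as a stand-in for the linear-Transformer inference) and
    evaluate Q(s, a) ~= w^T phi(s, a)."""
    d = context_sa.shape[1]
    A = context_sa.T @ context_sa + reg * np.eye(d)
    b = context_sa.T @ context_targets
    w = np.linalg.solve(A, b)
    return query_sa @ w

# Toy usage: random vectors stand in for features phi(s, a) and Bellman targets.
rng = np.random.default_rng(0)
phi_data = rng.normal(size=(1000, 8))        # dataset features phi(s, a)
q_targets = phi_data @ rng.normal(size=8)    # synthetic regression targets
phi_query = rng.normal(size=8)               # query (state, action) features

ctx_sa, ctx_y = retrieve_context(phi_query, phi_data, q_targets)
print(local_q_estimate(phi_query, ctx_sa, ctx_y))
```

Under the abstract's linear-approximability assumption, each retrieved neighbourhood admits its own weight vector, so the local fit above plays the role of one adaptively inferred local Q-function rather than a single global one.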
Primary Area: reinforcement learning
Submission Number: 21465