Yes, Q-learning Helps Offline In-Context RL

Published: 08 Mar 2025, Last Modified: 08 Apr 2025 · SSI-FM Poster · CC BY 4.0
Keywords: reinforcement learning, in-context reinforcement learning, offline reinforcement learning
Abstract: Existing scalable offline In-Context Reinforcement Learning (ICRL) methods have predominantly relied on supervised training objectives, which are known to have limitations in offline RL settings. In this work, we investigate the integration of reinforcement learning (RL) objectives into a scalable offline ICRL framework. Through experiments across more than 150 datasets derived from GridWorld and MuJoCo environments, we demonstrate that optimizing RL objectives improves performance by approximately 30% on average compared to the widely established Algorithm Distillation (AD) baseline across various dataset coverages, structures, expertise levels, and environmental complexities. Our results also reveal that offline RL-based methods outperform online approaches, which are not specifically designed for offline scenarios. These findings underscore the importance of aligning the learning objective with RL's reward-maximization goal and demonstrate that offline RL is a promising direction for ICRL settings.
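To make the contrast between the two training objectives concrete, below is a minimal sketch, not the authors' implementation, of how a supervised AD-style loss and an offline Q-learning (TD) loss might each be attached to the same in-context sequence model. The network, token layout, and all names (`InContextQNet`, `ad_loss`, `td_loss`) are illustrative assumptions for a discrete-action setting.

```python
# Sketch only: contrasting a supervised (AD-style) objective with an offline
# Q-learning (TD) objective on top of an in-context sequence model.
# Architecture, shapes, and names are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InContextQNet(nn.Module):
    """Causal transformer mapping a cross-episode context of (state, action,
    reward) tokens to per-step Q-values over a discrete action space."""

    def __init__(self, state_dim, num_actions, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        # Each token concatenates state, one-hot action, and scalar reward.
        self.embed = nn.Linear(state_dim + num_actions + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.q_head = nn.Linear(d_model, num_actions)

    def forward(self, tokens):
        # tokens: (batch, T, state_dim + num_actions + 1)
        T = tokens.size(1)
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1
        )
        h = self.backbone(self.embed(tokens), mask=causal_mask)
        return self.q_head(h)  # (batch, T, num_actions)


def ad_loss(q_net, tokens, actions):
    """Supervised AD-style objective: imitate the dataset action at each step."""
    logits = q_net(tokens)
    return F.cross_entropy(logits.flatten(0, 1), actions.flatten())


def td_loss(q_net, target_net, tokens, actions, rewards, next_tokens, dones, gamma=0.99):
    """Offline Q-learning objective: regress Q(s, a) toward
    r + gamma * max_a' Q_target(s', a') under the dataset transitions."""
    q = q_net(tokens).gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    with torch.no_grad():
        next_q = target_net(next_tokens).max(dim=-1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    return F.mse_loss(q, target)
```

In this sketch, swapping `ad_loss` for `td_loss` changes only the training signal, not the sequence model, which mirrors the abstract's framing of replacing the supervised objective with a reward-maximizing one inside the same scalable ICRL setup.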
Submission Number: 53