Vintix: Action Model via In-Context Reinforcement Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A cross-domain action model pre-trained directly for In-Context Reinforcement Learning
Abstract: In-Context Reinforcement Learning (ICRL) represents a promising paradigm for developing generalist agents that learn at inference time through trial-and-error interactions, analogous to how large language models adapt contextually, but with a focus on reward maximization. However, the scalability of ICRL beyond toy tasks and single-domain settings remains an open challenge. In this work, we present the first steps toward scaling ICRL by introducing a fixed, cross-domain model capable of learning behaviors through in-context reinforcement learning. Our results demonstrate that Algorithm Distillation, a framework designed to facilitate ICRL, offers a compelling, competitive alternative to expert distillation for constructing versatile action models. These findings highlight the potential of ICRL as a scalable approach for generalist decision-making systems.
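To make the abstract's central comparison concrete, below is a minimal sketch of the Algorithm Distillation (AD) idea it references: a causal transformer is trained to predict actions on sequences that span several consecutive episodes of a source RL algorithm's learning history, so that improvement itself becomes the modeled behavior and can emerge in-context at inference time. This is an illustrative assumption-laden toy, not the Vintix architecture; the class name `ADTransformer`, the dimensions, and the dummy tensors are all hypothetical.

```python
# Illustrative sketch of Algorithm Distillation (AD), not the Vintix model.
# Each timestep is embedded from its (observation, previous action, previous
# reward) triple; a causal mask restricts attention to earlier context.
import torch
import torch.nn as nn

class ADTransformer(nn.Module):
    def __init__(self, obs_dim, n_actions, d_model=64, n_layers=2,
                 n_heads=4, max_len=512):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, d_model)
        self.act_emb = nn.Embedding(n_actions + 1, d_model)  # +1: "no prev action"
        self.rew_proj = nn.Linear(1, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, obs, prev_act, prev_rew):
        # obs: (B, T, obs_dim); prev_act: (B, T) ints; prev_rew: (B, T) floats
        B, T, _ = obs.shape
        pos = torch.arange(T, device=obs.device)
        x = (self.obs_proj(obs) + self.act_emb(prev_act)
             + self.rew_proj(prev_rew.unsqueeze(-1)) + self.pos_emb(pos))
        # Boolean causal mask: True entries are disallowed attention positions.
        mask = torch.triu(torch.ones(T, T, device=obs.device, dtype=torch.bool),
                          diagonal=1)
        return self.head(self.encoder(x, mask=mask))  # (B, T, n_actions)

# Training step on placeholder data: sequences are slices of a learning
# history that cross episode boundaries, and the target at each step is the
# action the source RL algorithm took at that point in its own training.
model = ADTransformer(obs_dim=4, n_actions=3)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
obs = torch.randn(8, 128, 4)                 # dummy multi-episode context
prev_act = torch.randint(0, 4, (8, 128))     # shifted actions (index 3 = "none")
prev_rew = torch.randn(8, 128)
target_act = torch.randint(0, 3, (8, 128))   # source algorithm's actions
logits = model(obs, prev_act, prev_rew)
loss = nn.functional.cross_entropy(logits.reshape(-1, 3), target_act.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()
```

The contrast with expert distillation is in the data, not the architecture: expert distillation trains on final expert trajectories, whereas AD trains on whole learning histories, which is what lets the resulting action model keep improving from its own in-context experience.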
Lay Summary: What if AI agents could learn new tasks on the fly, just by interacting with their environment, without ever being retrained? In-Context Reinforcement Learning (ICRL) promises exactly that, but until now it hasn't scaled beyond toy examples or single-domain settings. Our work takes a leap forward: we introduce a single model that learns from inference-time trial and error across a wide variety of tasks and environments. No task-specific tuning. Just adaptation in real time. This is a glimpse into the future of generalist AI: agents that learn the way we do, by doing.
Link To Code: https://github.com/dunnolab/vintix
Primary Area: Reinforcement Learning->Batch/Offline
Keywords: in-context reinforcement learning
Submission Number: 11900