In-Context Adaptation for Generalizable Imitation Learning

Published: 16 Sept 2025, Last Modified: 25 Sept 2025 · CoRL 2025 Spotlight · CC BY 4.0
Keywords: In-Context Adaptation, Imitation Learning, Zero-Shot Generalization
TL;DR: An imitation learning policy conditioned on its interaction history can adapt in-context to unseen action dynamics, provided it is trained on a sufficiently diverse range of dynamics.
Abstract: While imitation learning on large-scale robot data produces robot policies with impressive task performance, these policies are typically reactive and lack the ability to adapt to novel conditions at test time. This limitation stands in stark contrast to Large Language Models (LLMs), which excel at in-context learning and adaptation. In this work, we take the first steps toward bridging this gap, exploring how imitation learning can instill in-context adaptation into robot policies. We specifically address the challenge of varying action dynamics, a scenario requiring online inference and adjustment. Our experiments with Diffusion Policy reveal that enabling such adaptation hinges on two critical components: conditioning the policy on histories of both observations and actions, and training on a diverse sampling of action dynamics. The resulting method successfully generalizes to unseen, out-of-distribution dynamics in context, representing a key advancement toward behavioral generalization in imitation learning.
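The abstract names two ingredients: conditioning the policy on a history of observations and actions, and training under diverse action dynamics. The sketch below is a hypothetical illustration of both (it is not the authors' code): `sample_action_dynamics` randomizes how commanded actions map to executed actions per episode, and `build_context` stacks recent (observation, action) pairs into a conditioning vector, so the policy could in principle infer the active dynamics from context. All function names, dimensions, and the specific dynamics family (random gain and sign flip) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action_dynamics(rng):
    """Randomize the mapping from commanded to executed actions.
    Here: a random per-episode gain and sign flip (an assumed, simple
    stand-in for the diverse dynamics the paper trains over)."""
    gain = rng.uniform(0.5, 2.0)
    sign = rng.choice([-1.0, 1.0])
    return lambda a: sign * gain * a

def build_context(obs_history, act_history, horizon=8):
    """Stack the last `horizon` (observation, action) pairs into one
    conditioning vector, zero-padding at the front of short episodes."""
    pairs = [np.concatenate([o, a]) for o, a in zip(obs_history, act_history)]
    pairs = pairs[-horizon:]
    pad = [np.zeros_like(pairs[0])] * (horizon - len(pairs))
    return np.concatenate(pad + pairs)

# One rollout step: the policy would see the history context, while the
# environment applies the hidden dynamics to the commanded action.
dynamics = sample_action_dynamics(rng)
obs_hist, act_hist = [np.ones(3)], [np.zeros(2)]
ctx = build_context(obs_hist, act_hist)          # shape: (8 * (3 + 2),)
executed = dynamics(np.array([1.0, -1.0]))       # what the robot actually does
```

In this framing, the history context is what lets a policy disambiguate the current dynamics online, which is why history conditioning and dynamics diversity must appear together during training.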
Submission Number: 10