Keywords: In-Context Imitation Learning, Diffusion Models, Graph Neural Networks
TL;DR: We formulate In-Context Imitation Learning as a diffusion-based graph generation problem and learn it using procedurally generated pseudo-demonstrations.
Abstract: Following the impressive in-context learning capabilities of large transformers, In-Context Imitation Learning (ICIL) is a promising opportunity for robotics. We introduce Instant Policy, which learns new tasks instantly from just one or two demonstrations, achieving ICIL through two key components. First, we introduce inductive biases through a graph representation and model ICIL as a graph generation problem using a learned diffusion process, enabling structured reasoning over demonstrations, observations, and actions. Second, we show that such a model can be trained using pseudo-demonstrations – arbitrary trajectories generated in simulation – as a virtually infinite pool of training data. Our experiments show that Instant Policy enables rapid learning of everyday robot tasks. We also show how it can serve as a foundation for cross-embodiment and zero-shot transfer to language-defined tasks.
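To make the abstract's core idea concrete, below is a minimal, purely illustrative sketch of treating ICIL as conditional generation of action nodes in a graph via a denoising diffusion process: demonstration and observation nodes form fixed context, and the action nodes are recovered by iterative denoising. The toy `denoiser` stands in for a learned graph network and is not the paper's architecture; all names, shapes, and schedules here are assumptions for illustration only.

```python
# Hypothetical sketch: diffusion-based generation of action nodes conditioned on a
# graph of demonstration and observation nodes. Not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: context nodes (demo + current observation) are fixed;
# action nodes (future gripper positions) are what we generate.
context_nodes = rng.normal(size=(8, 3))   # e.g. gripper/object poses from a demo
action_nodes = rng.normal(size=(4, 3))    # ground-truth future actions (for the forward process)

T = 50                                     # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def denoiser(noisy_actions, context, t):
    """Placeholder for a learned graph denoiser (e.g. a GNN over the full graph).
    It simply predicts noise as the offset from the context mean, purely to keep
    this sketch runnable."""
    return noisy_actions - context.mean(axis=0, keepdims=True)

# Forward process: corrupt the action nodes with Gaussian noise at the last step.
t = T - 1
eps = rng.normal(size=action_nodes.shape)
x = np.sqrt(alphas_bar[t]) * action_nodes + np.sqrt(1 - alphas_bar[t]) * eps

# Reverse process (deterministic DDIM-style updates): iteratively denoise the
# action nodes given the graph context.
for step in reversed(range(T)):
    eps_hat = denoiser(x, context_nodes, step)
    a_bar = alphas_bar[step]
    x0_hat = (x - np.sqrt(1 - a_bar) * eps_hat) / np.sqrt(a_bar)
    if step > 0:
        a_bar_prev = alphas_bar[step - 1]
        x = np.sqrt(a_bar_prev) * x0_hat + np.sqrt(1 - a_bar_prev) * eps_hat
    else:
        x = x0_hat

print("generated action nodes:\n", x)
```

In the same spirit, pseudo-demonstrations would correspond to procedurally generated context graphs (arbitrary simulated trajectories), giving an effectively unlimited supply of training pairs for the denoiser.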
Previous Publication: No
Submission Number: 38