Few-Shot In-Context Imitation Learning via Implicit Graph Alignment

Published: 30 Aug 2023, Last Modified: 16 Oct 2023 · CoRL 2023 Poster
Keywords: Few-Shot Imitation Learning, Graph Neural Networks
TL;DR: We learn to align graph representations of objects and use this alignment as the foundation of a few-shot in-context imitation learning framework.
Abstract: Consider the following problem: given a few demonstrations of a task across a few different objects, how can a robot learn to perform that same task on new, previously unseen objects? This is challenging because the large variety of objects within a class makes it difficult to infer the task-relevant relationship between the new objects and the objects in the demonstrations. We address this by formulating imitation learning as a conditional alignment problem between graph representations of objects. Consequently, we show that this conditioning allows for in-context learning, where a robot can perform a task on a set of new objects immediately after the demonstrations, without any prior knowledge about the object class or any further training. In our experiments, we explore and validate our design choices, and we show that our method is highly effective for few-shot learning of several real-world, everyday tasks, whilst outperforming baselines. Videos are available on our project webpage at https://www.robot-learning.uk/implicit-graph-alignment.
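To make the abstract's formulation more concrete, below is a minimal, illustrative sketch of conditioning an alignment objective on demonstrations: objects are represented as graphs of 3D keypoints, and a candidate relative pose between two new objects is scored by how well it reproduces the cross-object geometry seen in the demonstration pairs. All names (`object_graph`, `alignment_energy`), the keypoint representation, and the hand-coded energy are hypothetical placeholders; the paper itself learns this conditional alignment with graph neural networks rather than using the fixed statistic shown here.

```python
# Illustrative sketch only; not the authors' implementation.
# Assumption: each object is summarised by a small set of 3D keypoints, and an
# alignment "energy" compares a candidate relative pose between two new objects
# against the object-to-object geometry seen in the demonstrations.
import numpy as np


def object_graph(keypoints: np.ndarray) -> dict:
    """Toy graph: nodes are 3D keypoints, edges fully connect them."""
    n = len(keypoints)
    return {"nodes": keypoints,
            "edges": [(i, j) for i in range(n) for j in range(n) if i != j]}


def apply_pose(graph: dict, R: np.ndarray, t: np.ndarray) -> dict:
    """Rigidly transform a graph's node positions by rotation R and translation t."""
    return {"nodes": graph["nodes"] @ R.T + t, "edges": graph["edges"]}


def cross_distances(g_a: dict, g_b: dict) -> np.ndarray:
    """All node-to-node distances between two object graphs."""
    diff = g_a["nodes"][:, None, :] - g_b["nodes"][None, :, :]
    return np.linalg.norm(diff, axis=-1)


def alignment_energy(demo_pairs, live_a, live_b, R, t) -> float:
    """Low when pose (R, t) of object B relative to object A reproduces the
    cross-object distances observed in the demonstration pairs. A learned
    graph-network model would replace this hand-coded statistic."""
    live = cross_distances(live_a, apply_pose(live_b, R, t))
    demo = np.mean([cross_distances(a, b).mean() for a, b in demo_pairs])
    return float((live.mean() - demo) ** 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two demonstration pairs of (object A graph, object B graph).
    demos = [(object_graph(rng.normal(size=(5, 3))),
              object_graph(rng.normal(size=(5, 3)) + [0.0, 0.0, 0.2]))
             for _ in range(2)]
    live_a = object_graph(rng.normal(size=(5, 3)))
    live_b = object_graph(rng.normal(size=(5, 3)))
    # Naive search over random candidate translations (identity rotation).
    candidates = [rng.normal(size=3) for _ in range(100)]
    energies = [alignment_energy(demos, live_a, live_b, np.eye(3), t)
                for t in candidates]
    best_t = candidates[int(np.argmin(energies))]
    print("best translation:", best_t, "energy:", min(energies))
```

In this toy version the alignment is found by brute-force search over candidate poses; the conditional, learned alignment described in the abstract is what lets the method generalise to unseen objects directly from a few demonstrations.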
Student First Author: yes
Website: https://www.robot-learning.uk/implicit-graph-alignment