Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models

Published: 05 Sept 2024, Last Modified: 08 Nov 2024, CoRL 2024, CC BY 4.0
Keywords: Foundation Models, Language-conditioned Imitation Learning, Data Labeling
TL;DR: A novel framework to label uncurated, long-horizon robot demonstrations without any model training or human annotation for language-conditioned policy learning.
Abstract: A central challenge towards developing robots that can relate human language to their perception and actions is the scarcity of natural language annotations in diverse robot datasets. Moreover, robot policies that follow natural language instructions are typically trained on either templated language or expensive human-labeled instructions, hindering their scalability. To this end, we introduce NILS: Natural language Instruction Labeling for Scalability. NILS automatically labels uncurated, long-horizon robot data at scale in a zero-shot manner without any human intervention. NILS combines pre-trained vision-language foundation models in a sophisticated, carefully considered manner in order to detect objects in a scene, detect object-centric changes, segment tasks from large datasets of unlabeled interaction data, and ultimately label behavior datasets. Evaluations on BridgeV2 and a kitchen play dataset show that NILS is able to autonomously annotate diverse robot demonstrations from unlabeled and unstructured datasets, while alleviating several shortcomings of crowdsourced human annotations.
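The abstract describes NILS as a pipeline that chains off-the-shelf vision-language models: detect objects in each frame, use object-centric changes to segment a long-horizon demonstration into sub-tasks, and generate a language instruction per segment. The sketch below illustrates that flow in Python; all function names, signatures, and the change heuristic are hypothetical placeholders based only on the abstract, not the paper's actual implementation.

```python
# Minimal sketch of a zero-shot labeling pipeline in the spirit of NILS.
# `open_vocab_detector` and `vlm_captioner` are hypothetical callables
# standing in for pre-trained foundation models; they are not the paper's API.

from dataclasses import dataclass


@dataclass
class LabeledSegment:
    start: int        # first frame index of the segmented sub-task
    end: int          # last frame index of the segmented sub-task
    instruction: str  # generated natural-language instruction


def object_state_changed(prev_detections, curr_detections):
    """Placeholder heuristic: a change in the set of detected object
    classes counts as an object-centric change."""
    return set(prev_detections) != set(curr_detections)


def label_long_horizon_demo(frames, open_vocab_detector, vlm_captioner):
    """Split an unlabeled long-horizon demonstration into sub-tasks and
    annotate each with a language instruction, without human labels."""
    # 1) Detect objects in every frame with a pre-trained open-vocabulary detector.
    detections = [open_vocab_detector(frame) for frame in frames]

    # 2) Use object-centric changes between consecutive frames as candidate
    #    task boundaries for segmenting the demonstration.
    boundaries = [0]
    for t in range(1, len(frames)):
        if object_state_changed(detections[t - 1], detections[t]):
            boundaries.append(t)
    boundaries.append(len(frames) - 1)

    # 3) Generate an instruction for each segment with a vision-language model,
    #    conditioned on the segment's start and end observations.
    segments = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        instruction = vlm_captioner(frames[start], frames[end])
        segments.append(LabeledSegment(start, end, instruction))
    return segments
```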
Supplementary Material: zip
Spotlight Video: mp4
Website: http://robottasklabeling.github.io/
Publication Agreement: pdf
Student Paper: yes
Submission Number: 604