Task Robustness via Re-Labelling Vision-Action Robot Data

Published: 06 Sept 2025, Last Modified: 26 Sept 2025 · CoRL 2025 Robot Data Workshop · CC BY 4.0
Keywords: behavior cloning, language-conditioned behavior cloning, VLMs, data augmentation
Abstract: The recent trend in scaling models for robot learning has resulted in impressive policies that can perform various manipulation tasks and generalize to novel scenarios. However, these policies continue to struggle with following instructions, likely due to the limited linguistic and action sequence diversity in existing robotics datasets. This paper introduces $\textbf{T}$ask $\textbf{R}$obustness via R$\textbf{E}$-Labelling Vision-$\textbf{A}$ction Robot $\textbf{D}$ata (TREAD), a scalable framework that leverages large Vision-Language Models (VLMs) to augment existing robotics datasets without additional data collection, harnessing the transferable knowledge embedded in these models. Our approach operates in two stages: first, we use VLMs to generate diverse, grounded semantic sub-tasks from original instruction labels; second, we process demonstration videos to identify which segments correspond to each sub-task, effectively decomposing longer demonstrations into meaningful language-action pairs. We further enhance robustness by augmenting the data with linguistically diverse versions of the text goals. Evaluations on LIBERO demonstrate that policies trained on our augmented datasets exhibit improved performance on novel, unseen tasks and goals. Our results show that TREAD enhances both planning generalization through trajectory decomposition and language-conditioned policy generalization through increased linguistic diversity.
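The submission does not include code, but the two-stage procedure described in the abstract can be sketched as follows. This is a minimal illustration, assuming hypothetical `query_vlm` and `paraphrase_goal` helpers supplied by the caller; the prompts and data layout are assumptions for clarity, not the authors' actual implementation.

```python
# Sketch of a TREAD-style relabelling pipeline (illustrative assumptions
# throughout: query_vlm, paraphrase_goal, and prompt wording are hypothetical).
from dataclasses import dataclass

@dataclass
class Demo:
    instruction: str  # original language label, e.g. "put the bowl in the drawer"
    frames: list      # demonstration video frames
    actions: list     # robot actions aligned with frames

def relabel(demo: Demo, query_vlm, paraphrase_goal):
    # Stage 1: ask the VLM to decompose the original instruction into
    # grounded semantic sub-tasks.
    subtasks = query_vlm(
        f"List the sub-tasks needed to '{demo.instruction}'.",
        images=demo.frames[:: len(demo.frames) // 8 or 1],  # subsampled context
    )

    # Stage 2: ask the VLM which frame interval corresponds to each sub-task,
    # decomposing the long demonstration into shorter language-action pairs.
    segments = []
    for sub in subtasks:
        start, end = query_vlm(
            f"Which frame interval shows '{sub}'?", images=demo.frames
        )
        segments.append((sub, demo.frames[start:end], demo.actions[start:end]))

    # Linguistic augmentation: pair each segment with paraphrased text goals.
    augmented = []
    for goal, frames, actions in segments:
        for phrasing in [goal, *paraphrase_goal(goal)]:
            augmented.append(Demo(phrasing, frames, actions))
    return augmented
```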
Lightning Talk Video: mp4
Submission Number: 17