ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning

Published: 05 Sept 2024, Last Modified: 14 Oct 2024, CoRL 2024, CC BY 4.0
Keywords: Data Curation, Data Quality, Robot Imitation Learning
TL;DR: We use techniques from robust optimization to learn data mixture weights for the Bridge and RT-X datasets, and show that they improve downstream performance.
Abstract: Increasingly large robotics datasets are being collected to train ever-larger robot foundation models. However, although data selection has been of utmost importance to scaling in vision and natural language processing (NLP), little work in robotics has questioned what data such models should actually be trained on. In this work, we investigate how to weight different subsets, or "domains", of robotics datasets during pre-training to maximize worst-case performance across all possible downstream domains using distributionally robust optimization (DRO). Unlike in NLP, we find that these methods are hard to apply out of the box due to varying action spaces and dynamics across robots. Our method, ReMix, employs early stopping along with action normalization and discretization to counteract these issues. Through extensive experimentation on both the Bridge and OpenX datasets, we demonstrate that data curation can have an outsized impact on downstream performance. Specifically, domain weights learned by ReMix outperform uniform weights by over 40% on average and human-selected weights by over 20% on the datasets used to train the RT-X models.
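For intuition, below is a minimal sketch of the DRO-style mixture reweighting the abstract describes, in the spirit of excess-loss (DoReMi-style) multiplicative updates: domains where a proxy model lags a reference model are upweighted. The function name, hyperparameters, and exact update rule are illustrative assumptions, not the paper's actual implementation; see the linked repository for that.

```python
import numpy as np

def update_domain_weights(weights, proxy_losses, ref_losses,
                          step_size=1.0, smoothing=1e-3):
    """One DRO-style exponentiated-gradient update on domain mixture weights.

    Domains with high excess loss (proxy model worse than reference model)
    receive larger weight, steering training toward worst-case domains.
    Hypothetical sketch; hyperparameters are placeholders.
    """
    excess = np.maximum(proxy_losses - ref_losses, 0.0)  # per-domain excess loss
    logits = np.log(weights) + step_size * excess        # multiplicative update
    new_weights = np.exp(logits - logits.max())          # stable softmax
    new_weights /= new_weights.sum()
    # Mix with the uniform distribution so no domain's weight collapses to zero.
    uniform = np.ones_like(new_weights) / len(new_weights)
    return (1.0 - smoothing) * new_weights + smoothing * uniform

# Example: three domains, proxy currently worst on the third domain.
w = np.full(3, 1.0 / 3.0)
w = update_domain_weights(w,
                          proxy_losses=np.array([0.9, 0.7, 1.4]),
                          ref_losses=np.array([0.8, 0.7, 0.9]))
print(w)  # the third domain's weight increases
```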
Supplementary Material: zip
Code: https://github.com/jhejna/remix
Publication Agreement: pdf
Student Paper: yes
Submission Number: 634