Keywords: foundation models, grasping, datasets
TL;DR: A large number of robots run in the real world, exploring novel environments, proposing tasks using LLMs, and collecting data for those tasks.
Abstract: Foundation models that incorporate language, vision, and, more recently, actions have revolutionized the ability to harness internet-scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is the lack of data grounded in the physical world. In this paper, we propose AutoRT, a system that leverages existing foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. AutoRT leverages vision-language models (VLMs) and large language models (LLMs) for scene understanding, novel instruction proposal, and guided data collection. Tapping into the knowledge of foundation models enables AutoRT to effectively reason about autonomy tradeoffs and safety while significantly scaling up data collection for robot learning. We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies. We experimentally show that such “in-the-wild” data collected by AutoRT is significantly more diverse, and that AutoRT’s use of LLMs allows for instruction-following data-collection robots that can be aligned to human preferences. See more at https://auto-rt.github.io/
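For illustration only, the sketch below outlines the kind of explore-propose-filter-collect loop the abstract describes (VLM scene understanding, LLM instruction proposal, safety filtering, then data collection via an autonomous policy or teleoperation). All names here (`robot`, `describe_scene`, `propose_tasks`, `is_safe`, `choose_policy`) and the control flow are assumptions made for this sketch, not AutoRT's actual API or implementation.

```python
# Hypothetical sketch of an AutoRT-style data-collection loop.
# Every interface below is an assumption for illustration purposes.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Episode:
    instruction: str
    observations: list = field(default_factory=list)
    actions: list = field(default_factory=list)


def collect_episodes(
    robot,                                       # assumed robot interface
    describe_scene: Callable[[object], str],     # VLM: camera image -> scene description
    propose_tasks: Callable[[str], List[str]],   # LLM: scene description -> candidate instructions
    is_safe: Callable[[str], bool],              # LLM/rule check: filter unsafe or infeasible tasks
    choose_policy: Callable[[str], Callable],    # autonomy tradeoff: learned policy vs. teleoperation
    max_steps: int = 100,
) -> List[Episode]:
    """Run one explore -> propose -> filter -> collect iteration."""
    scene = describe_scene(robot.get_camera_image())   # scene understanding
    tasks = [t for t in propose_tasks(scene) if is_safe(t)]  # instruction proposal + filtering

    episodes = []
    for task in tasks:
        policy = choose_policy(task)
        ep = Episode(instruction=task)
        for _ in range(max_steps):
            obs = robot.get_observation()
            action = policy(obs, task)
            robot.step(action)
            ep.observations.append(obs)
            ep.actions.append(action)
            if robot.episode_done():
                break
        episodes.append(ep)
    return episodes
```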
Supplementary Material: zip
Submission Number: 37