Manipulate-Anything: Automating Real-World Robots using Vision-Language Models

Published: 26 Jun 2024, Last Modified: 09 Jul 2024 · DGR@RSS 2024 Poster (Spotlight) · License: CC BY 4.0
Keywords: Robot Learning; Multimodal Large Language Model; Data Generation; Imitation Learning; Behavior Cloning
TL;DR: A scalable, automated demonstration-generation method that leverages Vision-Language Models for real-world robotic manipulation
Abstract: Large-scale endeavors like RT-1 and widespread community efforts such as Open-X-Embodiment have contributed to growing the scale of robot demonstration data. However, there is still an opportunity to improve the quality, quantity, and diversity of robot demonstration data. Although vision-language models have been shown to automatically generate demonstration data, their utility has been limited: they operate only in environments with privileged state information, they require hand-designed skills, and they interact with only a few object instances. We propose Manipulate-Anything, a scalable automated demonstration-generation method for real-world robotic manipulation. Unlike prior work, our method operates in real-world environments without any privileged state information or hand-designed skills, and it can manipulate any static object. We evaluate our method in two settings. First, Manipulate-Anything successfully generates trajectories for all 5 real-world and 12 simulation tasks, significantly outperforming existing methods like VoxPoser. Second, Manipulate-Anything's demonstrations can train more robust behavior cloning policies than training with human demonstrations or with data generated by VoxPoser and Code-As-Policies. We believe Manipulate-Anything can be a scalable method both for generating robotics data and for solving novel tasks in a zero-shot setting. Project page: manipulate-anything.github.io.
Submission Number: 18
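
To make the data-generation loop described in the abstract concrete, below is a minimal sketch of how a VLM-driven demonstration collector could be structured from raw images alone (no privileged state). It is not the authors' implementation: `query_vlm`, `capture_rgb`, and `execute_step` are hypothetical placeholders standing in for a real VLM API, a camera interface, and a motion planner, respectively.

```python
"""Minimal sketch of VLM-driven demonstration generation (illustrative only)."""
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Step:
    """One VLM-proposed sub-goal, e.g. 'move the gripper above the mug handle'."""
    instruction: str
    gripper_open: bool


@dataclass
class Demonstration:
    task: str
    observations: List[bytes] = field(default_factory=list)  # RGB frames
    actions: List[Step] = field(default_factory=list)        # executed sub-goals


def query_vlm(image: bytes, prompt: str) -> List[Step]:
    """Hypothetical wrapper: send an image and a text prompt to a VLM and parse
    its answer into sub-goal steps. Replace with a real API call and parser."""
    return [Step("reach above the target object", True),
            Step("close the gripper on the object", False)]


def generate_demo(task: str,
                  capture_rgb: Callable[[], bytes],
                  execute_step: Callable[[Step], None]) -> Demonstration:
    """Collect one demonstration by letting the VLM decompose the task from
    images only; the recorded (observation, action) pairs can later be used
    for behavior cloning."""
    demo = Demonstration(task=task)
    image = capture_rgb()
    for step in query_vlm(image, f"Task: {task}. Propose the next sub-goals."):
        demo.observations.append(image)
        demo.actions.append(step)
        execute_step(step)      # hypothetical motion-planner / controller call
        image = capture_rgb()   # re-observe the scene after each sub-goal
    return demo
```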