Dreamitate: Real-World Visuomotor Policy Learning via Video Generation

Published: 05 Sept 2024, Last Modified: 21 Oct 2024, CoRL 2024, CC BY 4.0
Keywords: Imitation Learning, Visuomotor Policy, Video Generation
TL;DR: We introduce a visuomotor policy based on conditional video generation and 3D tracking that generalizes far better in manipulation tasks than traditional behavior cloning methods.
Abstract: A key challenge in manipulation is learning a policy that can robustly generalize to diverse visual environments. A promising mechanism for learning robust policies is to leverage video generative models, which are pretrained on large-scale datasets of internet videos. In this paper, we propose a visuomotor policy learning framework that fine-tunes a video diffusion model on human demonstrations of a given task. At test time, we generate a video of the task being executed, conditioned on images of a novel scene, and use this synthesized execution directly to control the robot. Our key insight is that using common tools allows us to effortlessly bridge the embodiment gap between the human hand and the robot manipulator. We evaluate our approach on four tasks of increasing complexity and demonstrate that capitalizing on internet-scale generative models allows the learned policy to achieve a significantly higher degree of generalization than existing behavior cloning approaches.
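To make the test-time pipeline in the abstract concrete, here is a minimal sketch of the generate-track-execute loop it describes. Every name below (`video_model.generate`, `tracker.estimate_pose`, `robot.move_tool_to`) is a hypothetical placeholder for illustration, not the authors' actual API; see the paper and code release for the real implementation.

```python
import numpy as np

def dreamitate_step(scene_image: np.ndarray, video_model, tracker, robot):
    """Hypothetical sketch of one test-time rollout as described in the abstract.

    Args:
        scene_image: image of the novel scene to condition on.
        video_model: video diffusion model fine-tuned on human demonstrations.
        tracker: 3D pose tracker for the shared tool.
        robot: interface to the robot manipulator.
    """
    # 1. Synthesize a video of the task being executed in the novel scene.
    generated_frames = video_model.generate(condition=scene_image)

    # 2. Recover the tool's pose trajectory from the synthesized video.
    #    Using a common tool in both human demos and robot execution is what
    #    bridges the embodiment gap between hand and manipulator.
    tool_poses = [tracker.estimate_pose(frame) for frame in generated_frames]

    # 3. Replay the recovered trajectory directly on the robot.
    for pose in tool_poses:
        robot.move_tool_to(pose)
```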
Supplementary Material: zip
Spotlight Video: mp4
Video: https://dreamitate.cs.columbia.edu/
Website: https://dreamitate.cs.columbia.edu/
Code: https://dreamitate.cs.columbia.edu/
Publication Agreement: pdf
Student Paper: yes
Submission Number: 46