Keywords: Diffusion Models, Image Generation, Behavior Cloning, Visuomotor
TL;DR: Stable Diffusion can be fine-tuned to draw joint-actions for visuomotor control.
Abstract: Image-generation diffusion models have been fine-tuned to unlock new capabilities such as image-editing and novel view synthesis. Can we similarly unlock image-generation models for visuomotor control? We present GENIMA, a behavior-cloning agent that fine-tunes Stable Diffusion to “draw joint-actions” as targets on RGB images. These images are fed into a controller that maps the visual targets into a sequence of joint-positions. We study GENIMA on 25 RLBench and 9 real-world manipulation tasks. We find that, by lifting actions into image-space, internet pre-trained diffusion models can generate policies that outperform state-of-the-art visuomotor approaches, especially in robustness to scene perturbations and in generalization to novel objects. Our method is also competitive with 3D agents, despite lacking priors such as depth, keypoints, or motion-planners.
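For intuition, below is a minimal sketch of the two-stage pipeline the abstract describes: a fine-tuned Stable Diffusion model draws action targets onto the RGB observation, and a learned controller maps the drawn targets to joint-positions. The checkpoint path, prompt, and controller architecture are hypothetical placeholders, not GENIMA's actual implementation; see the code link below for that.

```python
# Minimal sketch of a GENIMA-style two-stage pipeline (illustrative only).
# NOTE: the checkpoint path, prompt, and controller network below are
# hypothetical stand-ins; the real implementation is in the linked repo.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Stage 1: a fine-tuned Stable Diffusion model "draws" joint-action
# targets on top of the current RGB camera observation.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/finetuned-genima-sd",  # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

obs = Image.open("observation.png").convert("RGB")  # current camera frame
target_image = pipe(
    prompt="open the drawer",  # task instruction
    image=obs,
    strength=0.75,
    guidance_scale=7.5,
).images[0]

# Stage 2: a controller maps the drawn visual targets into a sequence
# of joint-positions (a toy CNN here, standing in for the real one).
class Controller(torch.nn.Module):
    def __init__(self, horizon: int = 20, num_joints: int = 7):
        super().__init__()
        self.horizon, self.num_joints = horizon, num_joints
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 8, stride=4), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 64, 4, stride=2), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(64, horizon * num_joints),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x).view(-1, self.horizon, self.num_joints)

controller = Controller()
joint_positions = controller(to_tensor(target_image).unsqueeze(0))
# joint_positions: (1, horizon, num_joints) targets for the robot arm
```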
Spotlight Video: mp4
Video: https://youtu.be/V0xJ833dCcU
Website: https://genima-robot.github.io/
Code: https://github.com/MohitShridhar/genima
Publication Agreement: pdf
Student Paper: no
Supplementary Material: zip
Submission Number: 132