Keywords: Robot Representation, Visual Foundation Model
TL;DR: We introduce a differentiable robot rendering method based on deformable Gaussian Splatting and show many downstream applications.
Abstract: Vision foundation models trained on massive amounts of visual data have shown unprecedented reasoning and planning skills in open-world settings. A key challenge in applying them to robotic tasks is the modality gap between visual data and action data. We introduce differentiable robot rendering, a method that makes the visual appearance of a robot body directly differentiable with respect to its control parameters. Our model integrates a kinematics-aware deformable model with Gaussian Splatting and is compatible with any robot form factor and degrees of freedom. We demonstrate its capability and usage in applications including reconstructing robot poses from images and controlling robots through vision-language models. Quantitative and qualitative results show that our differentiable rendering model provides effective gradients for robotic control directly from pixels, setting the foundation for future applications of vision foundation models in robotics.
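The core idea in the abstract, obtaining gradients for control directly from pixels via a differentiable renderer, can be illustrated with a toy sketch that is not the paper's actual model: a one-joint arm whose end effector is splatted onto an image as a single isotropic Gaussian, with the joint angle recovered from a target image by gradient descent on a photometric loss. All function names and constants below are illustrative, and the gradient is derived analytically rather than with an autodiff framework.

```python
import numpy as np

def render(theta, size=32, arm_len=10.0, sigma=3.0):
    """Splat the end effector of a 1-DoF arm as one Gaussian blob."""
    cx = cy = size / 2
    ex = cx + arm_len * np.cos(theta)
    ey = cy + arm_len * np.sin(theta)
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    r2 = (xs - ex) ** 2 + (ys - ey) ** 2
    return np.exp(-r2 / (2 * sigma ** 2)), (xs, ys, ex, ey)

def loss_and_grad(theta, target, arm_len=10.0, sigma=3.0):
    """Photometric MSE loss and its analytic gradient w.r.t. the joint angle."""
    img, (xs, ys, ex, ey) = render(theta, arm_len=arm_len, sigma=sigma)
    diff = img - target
    # Chain rule through the splat: dI/dtheta = I * ((x-ex)*dex + (y-ey)*dey) / sigma^2,
    # where (dex, dey) is the end-effector velocity w.r.t. the joint angle.
    dex, dey = -arm_len * np.sin(theta), arm_len * np.cos(theta)
    dI = img * ((xs - ex) * dex + (ys - ey) * dey) / sigma ** 2
    return np.sum(diff ** 2), np.sum(2 * diff * dI)

theta_true = 0.8
target, _ = render(theta_true)

theta = 0.3  # initial guess, far from the true pose
for _ in range(200):
    loss, g = loss_and_grad(theta, target)
    theta -= 0.003 * g  # gradient descent on pixels
```

Because the Gaussian splat is smooth, the pixel loss has a wide basin of attraction and plain gradient descent recovers the pose; the paper's method extends this idea to full robot kinematics and many 3D Gaussians.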
Supplementary Material: zip
Video: https://drrobot.cs.columbia.edu/assets/videos/video.mp4
Website: https://drrobot.cs.columbia.edu/
Code: https://github.com/cvlab-columbia/drrobot
Publication Agreement: pdf
Student Paper: yes
Submission Number: 72