Keywords: 3D State Representations, Gaussian Splatting, Deformable Objects, Vision-based Tracking
TL;DR: We present Cloth-Splatting, a method that integrates a pre-trained action-conditioned dynamics model with Gaussian Splatting for 3D state estimation of cloth-like deformable objects from RGB observations.
Abstract: We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to update the predicted states. Our key insight is that coupling a 3D mesh-based representation with Gaussian Splatting allows us to define a differentiable map between the cloth's state space and the image space. This enables the use of gradient-based optimization techniques to refine inaccurate state estimates using only RGB supervision. Our experiments demonstrate that Cloth-Splatting not only improves state estimation accuracy over current baselines but also reduces convergence time by $\sim 85\%$.
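The core idea above — predict a state with a dynamics model, then refine it by gradient descent through a differentiable state-to-image map so the rendering matches the RGB observation — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear projection `A` stands in for the differentiable Gaussian Splatting renderer, and the noisy `predicted` state stands in for the output of the action-conditioned dynamics model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the differentiable map from cloth state
# to image space (played by Gaussian Splatting in the paper): a fixed
# linear projection of a 4-D state vector to an 8-D "image".
A = rng.normal(size=(8, 4))

def render(state):
    # Differentiable state -> image map (toy linear version).
    return A @ state

def refine(predicted_state, observed_image, lr=0.01, steps=200):
    """Update step: refine the dynamics model's predicted state so
    its rendering matches the RGB observation, using gradient descent
    on the photometric error (analytic gradient for the linear map)."""
    s = predicted_state.copy()
    for _ in range(steps):
        residual = render(s) - observed_image   # photometric error
        grad = 2.0 * A.T @ residual             # d||residual||^2 / d(state)
        s -= lr * grad
    return s

# Ground-truth cloth state and its (noise-free) observation.
true_state = rng.normal(size=4)
observation = render(true_state)

# Prediction = truth + error, standing in for dynamics-model drift.
predicted = true_state + 0.5 * rng.normal(size=4)

refined = refine(predicted, observation)
```

After refinement, `refined` lies closer to `true_state` than the raw prediction did, mirroring how RGB supervision corrects the dynamics model's drift in the full method.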
Supplementary Material: zip
Website: https://kth-rpl.github.io/cloth-splatting/
Publication Agreement: pdf
Student Paper: no
Spotlight Video: mp4
Submission Number: 429