Modeling the Real World with High-Density Visual Particle Dynamics

Published: 05 Sept 2024, Last Modified: 17 Oct 2024, CoRL 2024, CC BY 4.0
Keywords: point clouds, particle dynamics, world models for control, Performers
TL;DR: Scalable robotics world models with learned particle dynamics trained from RGB-D
Abstract: We present High-Density Visual Particle Dynamics (HD-VPD), a learned world model that can emulate the physical dynamics of real scenes by processing massive latent point clouds containing 100K+ particles. To enable efficiency at this scale, we introduce a novel family of Point Cloud Transformers (PCTs) called Interlacers, which interleave linear-attention Performer layers with graph-based neighbour attention layers. We demonstrate the capabilities of HD-VPD by modeling the dynamics of high degree-of-freedom bi-manual robots with two RGB-D cameras. Compared to the previous graph neural network approach, our Interlacer dynamics model is twice as fast at the same prediction quality, and can achieve higher quality using 4x as many particles. We illustrate how HD-VPD can evaluate motion plan quality on robotic box-pushing and can-grasping tasks. See videos and particle dynamics rendered by HD-VPD at https://sites.google.com/view/hd-vpd.
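To make the Interlacer idea concrete, below is a minimal sketch (not the authors' code) of an interlaced block: a global linear-attention pass over all particles followed by a local softmax-attention pass over each particle's k nearest neighbours, each with a residual connection. The layer sizes, the elu-based kernel feature map, and the dense pairwise-distance neighbour search are illustrative assumptions; the paper's Performer layers use FAVOR+ random features, and a 100K+-particle implementation would use a proper k-NN structure rather than an O(N^2) distance matrix.

```python
# Hedged sketch of an Interlacer-style block in JAX (illustrative, not the paper's code).
import jax
import jax.numpy as jnp

def linear_attention(q, k, v):
    """Kernelized (linear-time) attention: softmax is replaced by a positive
    feature map, so cost is O(N d^2) instead of O(N^2 d). The elu+1 map is a
    simplification; Performers use FAVOR+ random features."""
    phi = lambda x: jax.nn.elu(x) + 1.0          # positive feature map (assumption)
    q, k = phi(q), phi(k)                        # (N, d)
    kv = k.T @ v                                 # (d, d) key/value summary
    z = q @ k.sum(axis=0)                        # (N,) normalizer
    return (q @ kv) / (z[:, None] + 1e-6)

def neighbour_attention(x, pos, num_neighbours=16):
    """Softmax attention restricted to each particle's k nearest neighbours in 3D.
    Dense pairwise distances are fine for a toy example only."""
    d2 = jnp.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)   # (N, N)
    idx = jnp.argsort(d2, axis=-1)[:, :num_neighbours]                # (N, k)
    q, k, v = x, x[idx], x[idx]                                       # (N,d), (N,k,d)
    logits = jnp.einsum("nd,nkd->nk", q, k) / jnp.sqrt(x.shape[-1])
    w = jax.nn.softmax(logits, axis=-1)
    return jnp.einsum("nk,nkd->nd", w, v)

def interlacer_block(x, pos):
    """One interlaced block: global linear attention, then local neighbour
    attention, each with a residual connection."""
    x = x + linear_attention(x, x, x)
    x = x + neighbour_attention(x, pos)
    return x

# Toy usage: 1024 latent particles with 64-dim features and 3D positions.
key = jax.random.PRNGKey(0)
feats = jax.random.normal(key, (1024, 64))
pos = jax.random.uniform(key, (1024, 3))
out = interlacer_block(feats, pos)
print(out.shape)  # (1024, 64)
```

The interleaving is the point of the design: the linear-attention pass propagates information globally at linear cost in the number of particles, while the neighbour-attention pass resolves fine-grained local interactions that kernelized attention alone tends to blur.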
Supplementary Material: zip
Spotlight Video: mp4
Website: https://sites.google.com/view/hd-vpd
Publication Agreement: pdf
Student Paper: no
Submission Number: 309