DDLP: Unsupervised Object-centric Video Prediction with Deep Dynamic Latent Particles

Published: 08 Feb 2024, Last Modified: 08 Feb 2024. Accepted by TMLR.
Abstract: We propose a new object-centric video prediction algorithm based on the deep latent particle (DLP) representation of Daniel and Tamar (2022). In comparison to existing slot- or patch-based representations, DLPs model the scene using a set of keypoints with learned parameters for properties such as position and size, and are both efficient and interpretable. Our method, deep dynamic latent particles (DDLP), yields state-of-the-art object-centric video prediction results on several challenging datasets. The interpretable nature of DDLP allows us to perform "what-if" generation -- predict the consequence of changing properties of objects in the initial frames -- and DLP's compact structure enables efficient diffusion-based unconditional video generation. Videos, code, and pre-trained models are available at: https://taldatech.github.io/ddlp-web
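For intuition, the sketch below illustrates the kind of interpretable per-frame particle state the abstract describes, and how it supports a "what-if" edit by modifying an object attribute in the conditioning frames. All names, dimensions, and the dictionary layout are illustrative assumptions for this sketch, not the repository's actual API.

```python
import torch

# A minimal sketch (assumed layout) of K latent particles for one frame:
# each particle carries interpretable attributes such as position and size.
K, FEATURE_DIM = 10, 4  # hypothetical particle count and appearance-code size

particles = {
    "position": torch.zeros(K, 2),             # (x, y) keypoint location
    "scale":    torch.ones(K, 2),              # per-axis object size
    "depth":    torch.zeros(K, 1),             # relative depth ordering
    "opacity":  torch.ones(K, 1),              # transparency / presence
    "features": torch.zeros(K, FEATURE_DIM),   # learned appearance code
}

# "What-if" generation: because the attributes are interpretable, editing the
# latents of the initial (burn-in) frames -- e.g., shifting one object along
# the x-axis -- changes the predicted future accordingly.
particles["position"][3, 0] += 0.25  # move particle 3 to the right
```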
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
1. Moved all metrics to the main text.
2. Added more details on DLPv2 to the main text.
3. Added optimization/training details to the main text.
4. Expanded the caption of Figure 9 (previously Figure 8).
5. Moved Figure 22, showing the relative positional bias, to the main text (now Figure 6).
6. "What-if" generation: clarified in the main text that, in practice, we apply the same modification to the first "burn-in" frames.
Video: https://www.youtube.com/watch?v=3S2pKhi_ewY
Code: https://github.com/taldatech/ddlp
Assigned Action Editor: ~Yingnian_Wu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1696