TextOCVP: Object-Centric Video Prediction with Language Guidance

Published: 04 Feb 2026, Last Modified: 04 Feb 2026. Accepted by TMLR. License: CC BY 4.0
Abstract: Understanding and forecasting future scene states is critical for autonomous agents to plan and act effectively in complex environments. Object-centric models, with their structured latent spaces, have shown promise in modeling object dynamics and interactions to predict future scene states, but they often struggle to scale beyond simple synthetic datasets and to integrate external guidance, limiting their applicability in robotic environments. To address these limitations, we propose TextOCVP, an object-centric model for video prediction guided by textual descriptions. TextOCVP parses an observed scene into object representations, called slots, and uses a text-conditioned transformer predictor to forecast future object states and video frames. Our approach jointly models object dynamics and interactions while incorporating textual guidance, enabling accurate and controllable predictions. TextOCVP’s structured latent space offers more precise control over the forecasting process, and the model outperforms several video prediction baselines on two datasets. Additionally, we show that structured object-centric representations provide superior robustness to novel scene configurations, as well as improved controllability and interpretability, enabling predictions that are both more accurate and easier to interpret.
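To make the described pipeline concrete, the sketch below illustrates a text-conditioned transformer that predicts future object slots from past slots, with autoregressive rollout over several frames. This is not the authors' implementation (see the linked repository for the official code); all module names, tensor shapes, and hyperparameters here are illustrative assumptions.

```python
# Hedged sketch only: slot-based prediction conditioned on text tokens.
# The real TextOCVP architecture may differ; consult the GitHub repository.
import torch
import torch.nn as nn


class TextConditionedSlotPredictor(nn.Module):
    """Predicts next-step object slots from a history of slots and a text prompt."""

    def __init__(self, slot_dim=128, text_dim=128, num_heads=4, num_layers=4):
        super().__init__()
        layer = nn.TransformerDecoderLayer(
            d_model=slot_dim,
            nhead=num_heads,
            dim_feedforward=4 * slot_dim,
            batch_first=True,
        )
        # Self-attention over past slots, cross-attention into the text tokens.
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.text_proj = nn.Linear(text_dim, slot_dim)  # map text features into slot space
        self.head = nn.Linear(slot_dim, slot_dim)       # residual update of the last slots

    def forward(self, slot_history, text_tokens):
        """
        slot_history: (B, T, N, slot_dim) slots of the T observed frames (N slots each)
        text_tokens:  (B, L, text_dim) encoded textual description
        returns:      (B, N, slot_dim) predicted slots for frame T+1
        """
        B, T, N, D = slot_history.shape
        tokens = slot_history.reshape(B, T * N, D)      # flatten time and objects
        memory = self.text_proj(text_tokens)            # text as cross-attention memory
        h = self.decoder(tgt=tokens, memory=memory)     # (B, T*N, D)
        last = h[:, -N:, :]                             # features of the last frame's slots
        return slot_history[:, -1] + self.head(last)    # residual next-step prediction


def rollout(predictor, slot_history, text_tokens, num_future):
    """Autoregressive rollout: feed predictions back in to forecast several frames."""
    preds = []
    history = slot_history
    for _ in range(num_future):
        next_slots = predictor(history, text_tokens)                 # (B, N, slot_dim)
        preds.append(next_slots)
        history = torch.cat([history, next_slots.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)                                 # (B, num_future, N, slot_dim)


if __name__ == "__main__":
    B, T, N, D, L = 2, 5, 6, 128, 12
    predictor = TextConditionedSlotPredictor(slot_dim=D, text_dim=D)
    slots = torch.randn(B, T, N, D)   # e.g. from an object-centric video encoder
    text = torch.randn(B, L, D)       # e.g. from a pretrained text encoder
    future_slots = rollout(predictor, slots, text, num_future=3)
    print(future_slots.shape)         # torch.Size([2, 3, 6, 128])
```

In this sketch the predicted slots would then be passed to a slot decoder to render future video frames; the residual formulation simply assumes that object states change smoothly between consecutive frames.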
Submission Type: Regular submission (no more than 12 pages of main content)
Video: https://ftiedual-my.sharepoint.com/personal/gjergj_plepi_fti_edu_al/_layouts/15/stream.aspx?id=%2Fpersonal%2Fgjergj%5Fplepi%5Ffti%5Fedu%5Fal%2FDocuments%2FTMLR%5Fvideo%5Fpresentation%2Emp4&ga=1&referrer=StreamWebApp%2EWeb&referrerScenario=AddressBarCopied%2Eview%2Ebe17ff9a%2D53ad%2D4118%2D8ef2%2D01a8cacf36e4
Code: https://github.com/angelvillar96/TextOCVP
Assigned Action Editor: ~Long_Chen8
Submission Number: 6356