Abstract: This work demonstrates that training autoregressive video diffusion models from a single video stream—resembling the experience of embodied agents—is not only possible, but can also be as effective as standard offline training given the same number of gradient steps. Our work further reveals that this main result can be achieved using experience replay methods that only retain a subset of the preceding video stream. To support training and evaluation in this setting, we introduce four new datasets for streaming lifelong generative video modeling: Lifelong Bouncing Balls, Lifelong 3D Maze, Lifelong Drive, and Lifelong PLAICraft, each consisting of one million consecutive frames from environments of increasing complexity. Together, our datasets and investigations lay the groundwork for video generative models and world models that continuously learn from single-sensor video streams rather than from fixed, curated video datasets.
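To make the abstract's notion of "experience replay that only retains a subset of the preceding video stream" concrete, here is a minimal sketch of one such scheme: a fixed-capacity reservoir buffer of short clips drawn from a single frame stream, interleaved with training steps. All names and parameters (ReservoirReplayBuffer, train_step, clip length, buffer capacity) are hypothetical placeholders for illustration, not the paper's actual method or hyperparameters.

```python
import random
from collections import deque

import numpy as np


class ReservoirReplayBuffer:
    """Keeps a bounded, uniformly sampled subset of all clips seen so far
    (reservoir sampling), so memory stays constant as the stream grows."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.buffer = []
        self.num_seen = 0
        self.rng = random.Random(seed)

    def add(self, clip):
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(clip)
        else:
            # Keep each past clip with probability capacity / num_seen.
            j = self.rng.randint(0, self.num_seen - 1)
            if j < self.capacity:
                self.buffer[j] = clip

    def sample(self, batch_size: int):
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))


def video_stream(num_frames=1000, height=8, width=8):
    """Stand-in for a single lifelong video stream: yields one frame at a time."""
    for _ in range(num_frames):
        yield np.random.rand(height, width, 3).astype(np.float32)


def train_step(batch):
    """Placeholder for one gradient step of the video diffusion model."""
    return float(np.mean([clip.mean() for clip in batch]))  # dummy "loss"


if __name__ == "__main__":
    clip_len = 4
    buffer = ReservoirReplayBuffer(capacity=256)
    window = deque(maxlen=clip_len)

    for frame in video_stream():
        window.append(frame)
        if len(window) == clip_len:
            # Store short clips rather than the full stream history.
            buffer.add(np.stack(list(window)))
        if buffer.num_seen and buffer.num_seen % 8 == 0:
            batch = buffer.sample(batch_size=16)
            loss = train_step(batch)
```

The design choice illustrated here is that the learner never revisits the raw stream: it trains only on the bounded buffer, which is what keeps memory and compute independent of the stream's total length.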
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Zhongwen_Xu1
Submission Number: 6407