Keywords: world models, memory, video diffusion models
Abstract: Video world models have attracted significant attention for their ability to produce high-fidelity future visual observations conditioned on past observations and navigation actions.
Temporally and spatially consistent long-term world modeling has remained a long-standing problem, unresolved even by recent state-of-the-art models, due to the prohibitively expensive computational cost of long-context inputs.
In this paper, we propose WorldPack, a video world model with efficient compressed memory, which significantly improves spatial consistency, fidelity, and quality in long-term generation despite much shorter context length.
Our compressed memory consists of trajectory packing and memory retrieval: trajectory packing achieves high context efficiency, while memory retrieval maintains consistency across rollouts and supports long-term generation that requires spatial reasoning.
We evaluate WorldPack on LoopNav, a Minecraft benchmark specialized for assessing long-term consistency, and verify that it notably outperforms strong state-of-the-art models.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 12457