Keywords: World Model, Video Generation, 3D Generation, 3D-aware Video Generation
TL;DR: FantasyWorld unifies video priors with geometric grounding in a feed-forward dual-branch model that emits video and 3D features in one pass, producing 3D-consistent worlds without per-scene optimization.
Abstract: High-quality 3D world models are pivotal for embodied intelligence and Artificial General Intelligence (AGI), underpinning applications such as AR/VR content creation and robotic navigation.
Despite their strong imaginative priors, current video foundation models lack explicit 3D grounding, which limits both their spatial consistency and their utility for downstream 3D reasoning tasks.
In this work, we present FantasyWorld, a geometry-enhanced framework that augments frozen video foundation models with a trainable geometric branch, enabling joint modeling of video latents and an implicit 3D field in a single forward pass.
Our approach introduces cross-branch supervision, where geometry cues guide video generation and video priors regularize 3D prediction, yielding consistent and generalizable 3D-aware video representations.
Notably, the resulting latents from the geometric branch can potentially serve as versatile representations for downstream 3D tasks such as novel view synthesis and navigation, without requiring per-scene optimization or fine-tuning.
Extensive experiments show that FantasyWorld effectively bridges video imagination and 3D perception, outperforming recent geometry-consistent baselines in multi-view coherence and style consistency.
Ablation studies further confirm that these gains stem from the unified backbone and cross-branch information exchange.
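As a conceptual illustration only (not the authors' implementation), the dual-branch design described above can be sketched with toy linear layers: a frozen video backbone produces latents, a trainable geometric branch maps them to 3D features, and a cross-branch term feeds geometry cues back into the video latents, all in one forward pass. All dimensions and weight names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): frames, video-latent dim, 3D-feature dim
T, D_VID, D_GEO = 4, 8, 6

# Frozen video backbone weights (fixed during training in the paper's setup)
W_video = rng.standard_normal((D_VID, D_VID))
# Trainable geometric branch: maps video latents to implicit 3D-field features
W_geo = rng.standard_normal((D_VID, D_GEO))
# Cross-branch term: geometry cues modulate video latents (highly simplified)
W_cross = rng.standard_normal((D_GEO, D_VID))

def forward(frames: np.ndarray):
    """Single feed-forward pass emitting video latents and 3D features jointly."""
    video_latents = np.tanh(frames @ W_video)      # frozen video prior
    geo_features = np.tanh(video_latents @ W_geo)  # trainable geometric branch
    # geometry guides video generation (cross-branch exchange, toy version)
    video_latents = video_latents + 0.1 * (geo_features @ W_cross)
    return video_latents, geo_features

frames = rng.standard_normal((T, D_VID))
v, g = forward(frames)
print(v.shape, g.shape)  # (4, 8) (4, 6)
```

The point of the sketch is only the data flow: both outputs come from one pass with no per-scene optimization loop, matching the feed-forward claim in the abstract.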
Supplementary Material: zip
Primary Area: generative models
Submission Number: 19383