MoRe4D: Joint 3D Motion Generation and Geometry Reconstruction for 4D Synthesis from a Single Image

10 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: 4D generation, dense point track, video diffusion model
Abstract: Generating interactive, dynamic 4D scenes from *a single static image* remains a core challenge. Most existing *generate-then-reconstruct* and *reconstruct-then-generate* methods decouple geometry from motion, causing spatiotemporal inconsistencies and poor generalization. To overcome these limitations, we extend the reconstruct-then-generate framework to jointly couple **Mo**tion generation with geometric **Re**construction for **4D** Synthesis (**MoRe4D**). We first introduce TrajScene-60K, a large-scale dataset of 60,000 video samples with dense point trajectories, addressing the scarcity of high-quality 4D scene data. Based on this, we propose a diffusion-based 4D Scene Trajectory Generator (4D-STraG) to jointly generate geometrically consistent and motion-plausible 4D point trajectories. To leverage single-view priors, we design a depth-guided motion normalization strategy and a motion-aware module in 4D-STraG for effective integration of geometry and dynamics. We then propose a 4D View Synthesis Module (4D-ViSM) to render videos with arbitrary camera trajectories from 4D point track representations. Extensive experiments show that MoRe4D generates high-quality 4D scenes with multi-view consistency and rich dynamic details from a single image.
Supplementary Material: zip
Primary Area: generative models
Submission Number: 3657