We introduce Diff4Splat, a feed-forward method that synthesizes controllable and explicit 4D scenes from a single image.
Our approach unifies the generative priors of video diffusion models with geometry and motion constraints learned from large-scale 4D datasets.
Given a single input image, a camera trajectory, and an optional text prompt, Diff4Splat directly predicts a deformable 3D Gaussian field that encodes appearance, geometry, and motion, all in a single forward pass, without test-time optimization or post-hoc refinement.
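For concreteness, the sketch below spells out one plausible layout for such a deformable 3D Gaussian field; the field names, tensor shapes, and counts are illustrative assumptions for this sketch, not the paper's actual representation.

```python
# Illustrative layout of a deformable 3D Gaussian field; all names and shapes
# are assumptions made for this sketch, not Diff4Splat's actual data format.
from dataclasses import dataclass
import torch

@dataclass
class DeformableGaussianField:
    means: torch.Tensor         # (N, 3) canonical Gaussian centers (geometry)
    rotations: torch.Tensor     # (N, 4) unit quaternions
    scales: torch.Tensor        # (N, 3) anisotropic scales
    opacities: torch.Tensor     # (N, 1)
    colors: torch.Tensor        # (N, 3) appearance
    deformations: torch.Tensor  # (T, N, 3) per-frame center offsets (motion)

    def at_frame(self, t: int) -> torch.Tensor:
        """Gaussian centers at frame t: canonical means plus that frame's offsets."""
        return self.means + self.deformations[t]

# Tiny usage example with random values standing in for a network prediction.
N, T = 4096, 8
field = DeformableGaussianField(
    means=torch.randn(N, 3), rotations=torch.randn(N, 4),
    scales=torch.rand(N, 3), opacities=torch.rand(N, 1),
    colors=torch.rand(N, 3), deformations=torch.zeros(T, N, 3))
centers_t3 = field.at_frame(3)  # (N, 3) geometry at the fourth frame
```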
At the core of our framework lies a video latent transformer, which augments video diffusion models to jointly capture spatio-temporal dependencies and predict time-varying 3D Gaussian primitives.
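As a rough illustration of how a transformer over video latents could emit time-varying 3D Gaussian primitives, here is a minimal PyTorch head that decodes flattened spatio-temporal latent tokens into per-token Gaussian parameters; the layer sizes and the 14-channel parameter split (3 center + 4 rotation + 3 scale + 1 opacity + 3 color) are assumptions for the sketch, not the actual Diff4Splat architecture.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Toy decoder from video latent tokens to per-token 3D Gaussian parameters.
    Dimensions and the 14-channel split are assumptions for this sketch only."""
    def __init__(self, latent_dim: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(latent_dim, n_heads, batch_first=True)
        self.spatio_temporal = nn.TransformerEncoder(layer, n_layers)
        # 14 = 3 (center) + 4 (quaternion) + 3 (scale) + 1 (opacity) + 3 (color)
        self.to_gaussians = nn.Linear(latent_dim, 14)

    def forward(self, video_latents: torch.Tensor) -> torch.Tensor:
        # video_latents: (B, T * S, latent_dim) flattened spatio-temporal tokens
        tokens = self.spatio_temporal(video_latents)
        return self.to_gaussians(tokens)    # (B, T * S, 14) time-varying Gaussians

# Random latents stand in for features produced by the video diffusion backbone.
latents = torch.randn(1, 8 * 64, 256)       # 8 frames x 64 spatial tokens
gaussian_params = GaussianHead()(latents)   # -> (1, 512, 14)
```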
Training is guided by objectives on appearance fidelity, geometric accuracy, and motion consistency, enabling Diff4Splat to synthesize high-quality 4D scenes in 30 seconds.
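The three supervision signals might be combined along these lines; the loss names, weights, and the choice of plain L1 terms are placeholder assumptions used only to show how photometric, geometric, and motion objectives could add up into one training loss.

```python
import torch
import torch.nn.functional as F

def training_loss(pred_rgb, gt_rgb, pred_depth, gt_depth, pred_flow, gt_flow,
                  w_photo: float = 1.0, w_geo: float = 0.5, w_motion: float = 0.5):
    """Hypothetical weighted sum of appearance, geometry, and motion terms."""
    photo = F.l1_loss(pred_rgb, gt_rgb)        # appearance fidelity
    geo = F.l1_loss(pred_depth, gt_depth)      # geometric accuracy (e.g. depth)
    motion = F.l1_loss(pred_flow, gt_flow)     # motion consistency (e.g. scene flow)
    return w_photo * photo + w_geo * geo + w_motion * motion
```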
We demonstrate the effectiveness of Diff4Splat across video generation, novel view synthesis, and geometry extraction, where it matches or surpasses optimization-based methods for dynamic scene synthesis while being significantly more efficient.
The code and pre-trained model will be released.
The network architecture of Diff4Splat. Our high-fidelity explicit 4D scene generation method from a single image rests on four key components: video diffusion latents as the generative backbone, a novel Transformer that predicts dynamic 3DGS deformations, unified supervision with photometric, geometric, and motion losses, and progressive training for robust geometry and texture.
Qualitative comparison: Input Image | Ours (feed-forward) | MoSca (test-time optimization).