Keywords: 4D Reconstruction, Casual Video, Efficient
TL;DR: Reconstruct a casual video in less than 2 minutes
Abstract: Dynamic view synthesis has seen significant advances, yet reconstructing scenes
from uncalibrated, casual video remains challenging due to slow optimization and
complex parameter estimation. In this work, we present **Instant4D**, a monocular
reconstruction system that leverages a native 4D representation to efficiently process
casual video sequences within minutes, without calibrated cameras or depth sensors.
Our method begins with geometric recovery through deep visual SLAM, followed
by grid pruning to optimize scene representation. Our design significantly reduces
redundancy while maintaining geometric integrity, cutting model size to under **10%**
of its original footprint. To handle temporal dynamics efficiently, we introduce a
streamlined 4D Gaussian representation, achieving a **30×** speed-up and reducing
training time to within two minutes, while maintaining competitive performance
across several benchmarks. Our method reconstructs a single video within 10
minutes on the Dycheck dataset, or for a typical 200-frame video. We further
apply our model to in-the-wild videos, showcasing its generalizability. Our project
website is available at https://instant4d.github.io/.
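The abstract does not include implementation details, so the snippet below is only a minimal sketch of what the grid-pruning step could look like, assuming the dense point cloud recovered by visual SLAM is deduplicated on a regular voxel grid. The function name `grid_prune`, the `voxel_size` value, and the NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def grid_prune(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Hypothetical sketch: keep one representative point per occupied voxel."""
    # Quantize each 3D point to an integer voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows finds each occupied voxel once; return_index gives
    # the first point that fell into that voxel.
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points[np.sort(keep)]

# Example: prune a synthetic stand-in for a dense SLAM point cloud.
pts = np.random.rand(100_000, 3)
pruned = grid_prune(pts, voxel_size=0.05)
print(f"kept {len(pruned)} of {len(pts)} points")
```

Keeping one point per occupied voxel is a common way to remove redundant, near-duplicate points accumulated across overlapping frames while preserving coarse scene geometry.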
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 9476