Keywords: Gaussian Splatting, Dynamic-Static Decomposition, Egocentric Data, Dynamic Scene Modeling, Distractor-Free Scene Modeling
TL;DR: Using Gaussian-Splatting-based self-supervised dynamic-static decomposition, DeGauss achieves state-of-the-art distractor-free static scene reconstruction from occluded inputs and yields a high-quality, efficient dynamic scene representation.
Abstract: Reconstructing clean, distractor-free 3D scenes from real-world captures remains a significant challenge, particularly in highly dynamic and cluttered settings such as egocentric videos. To tackle this problem, we introduce DeGauss, a simple and robust self-supervised framework for dynamic scene reconstruction based on a decoupled dynamic-static Gaussian Splatting design. DeGauss models dynamic elements with foreground Gaussians and static content with background Gaussians, using a probabilistic mask to coordinate their composition and enable independent yet complementary optimization. DeGauss generalizes robustly across a wide range of real-world scenarios, from casual image collections to long, dynamic egocentric videos, without relying on complex heuristics or extensive supervision. Experiments on benchmarks including NeRF-on-the-go, ADT, AEA, Hot3D, and EPIC-Fields demonstrate that DeGauss consistently outperforms existing methods, establishing a strong baseline for generalizable, distractor-free 3D reconstruction in highly dynamic, interaction-rich environments.
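As a sketch of the mask-based composition described in the abstract (the exact formulation is not given here, so the symbols below are assumptions): with a per-pixel probabilistic mask $M \in [0,1]$, a foreground (dynamic) rendering $C_f$, and a background (static) rendering $C_b$, the composed image would take the form

$\hat{C} = M \odot C_f + (1 - M) \odot C_b$,

where $\odot$ denotes element-wise multiplication, so each pixel blends the two renderings according to the probability that it belongs to a dynamic distractor.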
Submission Number: 7