DCGS: High-Fidelity Monocular Dynamic Scene Synthesis with Divide-and-Conquer Gaussian Splatting

Published: 29 Jan 2024, Last Modified: 02 Feb 2026 · OpenReview Archive Direct Upload · CC BY 4.0
Abstract: Recovering high-quality dynamic objects from monocular videos remains highly challenging, particularly under complex non-linear motions and photometric inconsistencies. To address this problem, we propose DCGS, a novel Gaussian splatting–based framework that achieves high-quality view synthesis through a divide-and-conquer strategy. At its core is a \textbf{Gl}obal \textbf{A}nd local \textbf{D}ecoupling transformation field built upon Chebyshev bases: their orthogonality naturally decouples low-frequency global trends from high-frequency local dynamics, improving stability and giving the motion representation a clearer physical meaning. Extensive experiments on challenging monocular dynamic datasets demonstrate that DCGS surpasses state-of-the-art baselines in rendering quality, achieving significantly higher PSNR and SSIM than previous methods.
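The decoupling idea can be sketched in one dimension: low-order Chebyshev coefficients carry the slow global trend, while higher-order coefficients carry the local detail. A minimal NumPy sketch follows; the toy trajectory, fit degree, and split point are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical 1-D illustration (not the paper's actual transformation field):
# fit a toy trajectory with a Chebyshev basis, then split the coefficients into
# a low-order "global" trend and a higher-order "local" residual.
t = np.linspace(-1.0, 1.0, 200)                  # normalized time
x = 0.5 * t + 0.1 * np.sin(3 * np.pi * t)        # slow trend + faster oscillation

coeffs = C.chebfit(t, x, deg=20)                 # least-squares Chebyshev fit
global_part = C.chebval(t, coeffs[:3])           # degrees 0-2: low-frequency trend
local_part = C.chebval(t, coeffs) - global_part  # high-frequency remainder

# The two parts sum back to the full fit exactly, so each can be
# regularized or supervised independently without interference.
```

Because the Chebyshev basis is orthogonal, truncating the coefficient vector is a principled low-pass projection, which is what makes the global/local split well defined rather than an arbitrary partition.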