Fast SP-GS: Reconstructing Dynamic Scenes in Minutes

Diwen Wan, Jiaxiang Tang, Ruijie Lu, Yuxiang Wang, Gang Zeng

Published: 2025 · Last Modified: 08 Mar 2026 · ISMAR 2025 · License: CC BY-SA 4.0
Abstract: Despite recent advances in Gaussian Splatting techniques such as Superpoint Gaussian Splatting (SP-GS), which enables real-time, high-fidelity rendering, 3D reconstruction of dynamic scenes remains a significant challenge in computer vision. In particular, SP-GS requires nearly an hour to optimize a dynamic scene, severely limiting its practical applications in AR and VR. To address this limitation, we propose Fast SP-GS, an efficient approach that reduces training time to mere minutes. Building upon acceleration methods for static scenes (e.g., Mini-Splatting, Taming 3DGS, FlashGS), our 2D-GS-based framework improves both speed and quality through three key innovations. First, an aggressive 2D-GS densification strategy reduces the number of training iterations required, while a Gaussian simplification strategy removes redundant parameters. Second, a novel 2D-GS optical flow loss provides explicit motion supervision, accelerating convergence. Third, an optimized CUDA implementation maximizes rendering efficiency. Extensive experiments on synthetic and real-world datasets confirm that Fast SP-GS reconstructs dynamic scenes in minutes while surpassing SP-GS in both rendering quality and computational efficiency. The source code is available at https://github.com/dnvtmf/Fast-SP-GS.
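The abstract does not spell out the form of the 2D-GS optical flow loss. As a rough illustration only, the sketch below shows one plausible way such explicit motion supervision could be implemented: a screen-space flow map rendered from the Gaussians is compared against a reference flow precomputed between adjacent frames (e.g., by an off-the-shelf estimator). The function name, tensor shapes, and masking scheme are assumptions for the sketch, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of an optical-flow supervision loss.
# All names and shapes below are illustrative assumptions.
import torch

def optical_flow_loss(rendered_flow: torch.Tensor,
                      reference_flow: torch.Tensor,
                      valid_mask: torch.Tensor) -> torch.Tensor:
    """Masked L1 loss between rendered and reference flow.

    rendered_flow:  (H, W, 2) screen-space flow splatted from the 2D Gaussians.
    reference_flow: (H, W, 2) flow precomputed between adjacent frames.
    valid_mask:     (H, W) 1 where the reference flow is reliable, else 0.
    """
    # Per-pixel L1 error over the two flow components.
    diff = (rendered_flow - reference_flow).abs().sum(dim=-1)  # (H, W)
    # Average over valid pixels; clamp guards against an all-zero mask.
    return (diff * valid_mask).sum() / valid_mask.sum().clamp(min=1.0)

# Tiny usage example with dummy tensors:
H, W = 4, 4
pred = torch.zeros(H, W, 2, requires_grad=True)
ref = torch.ones(H, W, 2)
mask = torch.ones(H, W)
loss = optical_flow_loss(pred, ref, mask)  # masked mean L1 error = 2.0
loss.backward()  # gradients flow back to the rendered flow map
```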