PGDGS: Improving Few-shot 3D Gaussian Splatting with Progressive Gaussian Densification

Published: 2025, Last Modified: 20 Oct 2025 · ICASSP 2025 · CC BY-SA 4.0
Abstract: Synthesizing novel views from sparse input images is a significant and challenging problem in neural rendering. As an innovative 3D representation, 3D Gaussian Splatting (3DGS) has demonstrated exceptional performance and real-time rendering capabilities. However, rendering novel views from few-shot inputs remains a formidable problem for 3DGS. To address this challenge, we propose PGDGS, a highly efficient method that surpasses existing methods with minimal modifications to the original 3DGS framework. We conducted a detailed analysis of the challenges 3DGS encounters in sparse settings and identified the crucial role played by the Gaussian densification strategy during training. Building upon this observation, we propose two strategies: Progressive Gaussian Densification, which reconstructs Gaussians from coarse to fine, and Dynamic Frequency Regularization, which enhances the details of the reconstruction. We demonstrate that the original 3DGS can achieve performance comparable to existing methods with only a few lines of code changed. Notably, owing to its simplicity and minimal training overhead, our approach achieves a 120× improvement in training speed and a 4000× increase in inference speed over previous NeRF-based methods. PGDGS achieves state-of-the-art performance across diverse datasets, including LLFF and Mip-NeRF360.
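The abstract does not specify how the coarse-to-fine schedule is implemented. As a rough illustration of what a progressive densification schedule could look like in practice, the sketch below anneals the positional-gradient threshold that triggers 3DGS's densify-and-split step, so that early iterations add only large, coarse Gaussians and later iterations permit finer splits. The function name, threshold values, and linear schedule are all hypothetical, not taken from the paper.

```python
def densify_grad_threshold(step: int, total_steps: int,
                           coarse: float = 4e-4, fine: float = 2e-4) -> float:
    """Hypothetical coarse-to-fine schedule for the densification trigger.

    A higher threshold early in training means fewer Gaussians are split
    (coarse reconstruction); lowering it over time allows progressively
    finer detail. Values and the linear ramp are illustrative only.
    """
    t = min(step / total_steps, 1.0)  # training progress in [0, 1]
    return coarse + (fine - coarse) * t
```

A per-iteration training loop would then compare each Gaussian's accumulated view-space positional gradient against `densify_grad_threshold(step, total_steps)` instead of the fixed threshold used in the original 3DGS codebase.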