Mitigating Spatial Redundancy: A Predictive Compression Framework for 3D Gaussian Splatting

ICLR 2026 Conference Submission 15771 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: 3D Gaussian Splatting, Compression, Prediction
Abstract: 3D Gaussian Splatting (3DGS) has emerged as a promising framework for Novel View Synthesis (NVS) due to its superior rendering quality and real-time performance. However, its widespread adoption is hindered by the substantial storage and transmission costs associated with the massive number of primitives. Notably, existing 3DGS compression approaches encode every primitive in its entirety, failing to exploit spatial continuity to compress content shared across primitives. In this work, we propose \textbf{Predict-GS}, a predictive compression framework for anchor-based Gaussian splatting that mitigates spatial redundancy among anchors. Specifically, we construct a Spatial Feature Pool (SFP) based on a hybrid representation of multi-resolution 3D grids and 2D planes, which serves to predict coarse Gaussians for scene reconstruction. To refine these predictions, we introduce a residual compensation module equipped with a Multi-head Gaussian Residual Decoder (MGRD) that models corrections for shape and appearance, thereby transforming coarse Gaussians into high-fidelity ones. Furthermore, we revisit the inherent characteristics of our framework and design a prediction-tailored progressive training strategy to enhance its effectiveness. Extensive experiments on public benchmarks demonstrate the effectiveness of our framework, achieving a remarkable size reduction of over 58× compared to vanilla 3DGS on Mip-NeRF360 and outperforming state-of-the-art (SOTA) compression methods.
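The predict-then-refine idea in the abstract can be sketched minimally: a hybrid feature pool (one 3D grid plus three axis-aligned 2D planes) is queried at an anchor position, a coarse decoder predicts Gaussian parameters, and a residual head stands in for the MGRD to add corrections. All names, dimensions, and the linear "decoders" below are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Spatial Feature Pool: one low-res 3D grid and three 2D
# planes (xy, xz, yz); sizes are arbitrary for illustration.
GRID_RES, PLANE_RES, FDIM = 8, 16, 4
grid3d = rng.standard_normal((GRID_RES,) * 3 + (FDIM,))
planes = [rng.standard_normal((PLANE_RES, PLANE_RES, FDIM)) for _ in range(3)]

def query_sfp(p):
    """Gather features for a point p in [0,1)^3 (nearest-cell lookup;
    a real implementation would use tri/bi-linear interpolation)."""
    gi = np.minimum((p * GRID_RES).astype(int), GRID_RES - 1)
    feats = [grid3d[tuple(gi)]]
    for (a, b), plane in zip([(0, 1), (0, 2), (1, 2)], planes):
        pi = np.minimum((p[[a, b]] * PLANE_RES).astype(int), PLANE_RES - 1)
        feats.append(plane[tuple(pi)])
    return np.concatenate(feats)  # shape: (4 * FDIM,)

# Stand-in linear decoders (a real system would use small MLPs):
# 13 output dims = 3 mean offsets + 3 log-scales + 4 rotation quat + 3 color.
W_coarse = rng.standard_normal((4 * FDIM, 13)) * 0.1
W_resid = rng.standard_normal((4 * FDIM, 13)) * 0.01

anchor = np.array([0.3, 0.7, 0.5])
f = query_sfp(anchor)
coarse = f @ W_coarse            # coarse Gaussian predicted from the SFP
refined = coarse + f @ W_resid   # residual compensation (MGRD's role)
print(refined.shape)             # (13,)
```

Since the coarse prediction is shared structure recovered from the compact feature pool, only the pool and the (small) residual corrections need to be stored, which is where the spatial-redundancy savings come from.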
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 15771