LatentGS: Probabilistic Densification for Efficient, Compact, and Faster 3D Gaussian Splatting

Published: 02 Mar 2026, Last Modified: 15 Apr 2026, ICLR 2026 Workshop World Models, License: CC BY 4.0
Keywords: Novel-view synthesis; Rendering; 3D reconstruction; 3DGS; Gaussian splatting; World models
TL;DR: A scene reconstruction approach that is more efficient than existing approaches and can serve as a basis for representing world models.
Abstract: We present LatentGS, a variational reformulation of 3D Gaussian Splatting that replaces heuristic densification with a learned probabilistic model. A Variational Autoencoder (VAE) learns the joint distribution of scene geometry, appearance, and uncertainty, enabling adaptive sampling of new Gaussians directly from the latent space. Placement is guided by a three-dimensional variant of a Laplacian-kernel penalty map, which targets regions of high spatial variation. The resulting scenes are 30%-90% more compact than vanilla 3DGS with superior quality, yielding faster training and rendering without loss of fidelity. Moreover, after training, LatentGS can serve as a standalone densifier/refiner of the reconstructed 3D scene, allowing users to further refine or add detail to selected regions of interest (ROIs). Together, these contributions produce a compact, perceptually stable, and efficient 3D representation that advances the quality and scalability of Gaussian Splatting.
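The abstract does not spell out the penalty map, but the idea of a Laplacian-kernel map targeting regions of high spatial variation can be sketched as follows. This is an illustrative sketch only, assuming a hypothetical voxelized scene quantity (e.g. opacity or density) on a regular grid; the name `laplacian_penalty_map` and the normalization into a sampling distribution are assumptions, not the paper's implementation.

```python
import numpy as np

def laplacian_penalty_map(grid: np.ndarray) -> np.ndarray:
    """Penalty map from the magnitude of a discrete 3D Laplacian.

    `grid` is a hypothetical voxelized scene quantity; a large
    |Laplacian| flags regions of high spatial variation, the kind of
    region a densification scheme like LatentGS's would target.
    Illustrative sketch, not the paper's implementation.
    """
    # 6-neighbor discrete Laplacian via shifted copies of the grid.
    lap = -6.0 * grid
    for axis in range(3):
        lap += np.roll(grid, 1, axis=axis) + np.roll(grid, -1, axis=axis)
    penalty = np.abs(lap)
    total = penalty.sum()
    # Normalize into a sampling distribution over voxels (uniform if flat).
    if total > 0:
        return penalty / total
    return np.full(grid.shape, 1.0 / grid.size)

# Example: a sharp bump in an otherwise smooth grid receives the
# highest penalty, so new Gaussians would be sampled there first.
g = np.zeros((8, 8, 8))
g[4, 4, 4] = 1.0
p = laplacian_penalty_map(g)
assert p.argmax() == np.ravel_multi_index((4, 4, 4), g.shape)
```

Under this sketch, new Gaussians would be drawn by sampling voxel locations proportionally to `p`, concentrating refinement where the scene changes fastest.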
Submission Number: 30