Abstract: Following the advent of NeRFs, 3D Gaussian Splatting (3DGS) has paved the way for real-time neural rendering, overcoming the computational burden of volumetric methods. Several extensions of 3DGS have been proposed to achieve compact, high-fidelity representations. However, by employing a geometry-agnostic optimization scheme, these methods neglect the inherent 3D structure of the scene, thereby restricting the expressivity and quality of the representation and resulting in floaters and rendering artifacts. In this work, we propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene, which translates into state-of-the-art
rendering performance and reduced storage requirements on benchmark
datasets. SAGS is founded on a local-global graph representation that
facilitates the learning of complex scenes and enforces meaningful point
displacements that preserve the scene’s geometry. Additionally, we introduce a lightweight version of SAGS, using a simple yet effective mid-point
interpolation scheme, which showcases a compact representation of the
scene with up to 24× size reduction without the reliance on any compression strategies. Extensive experiments across multiple benchmark
datasets demonstrate the superiority of SAGS compared to state-of-the-art 3DGS methods in both rendering quality and model size. Furthermore, we demonstrate that our structure-aware method effectively mitigates the floating artifacts and irregular distortions of previous methods while producing precise depth maps.
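To make the mid-point interpolation idea concrete, the sketch below densifies a point set by inserting the midpoint of every edge of a point graph. This is only an illustrative NumPy sketch under assumed inputs (a point array and a precomputed edge list, e.g. from a k-NN graph); the function name and setup are our own, not the paper's implementation.

```python
import numpy as np

def midpoint_densify(points: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Append the midpoint of every edge to the point set.

    points: (N, 3) array of 3D positions.
    edges:  (E, 2) integer array of point-index pairs (e.g. from a k-NN graph).
    Returns an (N + E, 3) array: original points followed by edge midpoints.
    """
    midpoints = 0.5 * (points[edges[:, 0]] + points[edges[:, 1]])
    return np.concatenate([points, midpoints], axis=0)

# Tiny usage example: a unit triangle densified along its three edges.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
edg = np.array([[0, 1], [1, 2], [2, 0]])
dense = midpoint_densify(pts, edg)
print(dense.shape)  # (6, 3)
```

Because the new points are derived on the fly from existing ones, only the original (coarser) point set needs to be stored, which is how an interpolation scheme of this kind can shrink the stored representation.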