Keywords: Gaussian Splatting, Stereo, Surface Reconstruction, Multiview
TL;DR: We propose to use a pre-trained stereo model as a mediator between noisy Gaussian Splatting clouds and smooth 3D surfaces: Gaussian Splatting -> Rendering stereo-aligned pairs -> Depths from stereo model -> Fusion -> Smooth 3D mesh
Abstract: Recently, 3D Gaussian Splatting (3DGS) has emerged as an efficient approach for accurately representing scenes.
However, despite its superior novel view synthesis capabilities, extracting the scene geometry directly from the Gaussian properties remains a challenge, as the Gaussians are optimized with a purely photometric loss.
While some concurrent methods add geometric constraints during the Gaussian optimization process, they still produce noisy, unrealistic surfaces.
We propose a novel approach for bridging the gap between the noisy 3DGS representation and the smooth 3D mesh representation, by injecting real-world knowledge into the depth extraction process.
Instead of extracting the geometry of the scene directly from the Gaussian properties, we extract it through a pre-trained stereo-matching model.
We render stereo-aligned pairs of images corresponding to the original training poses, feed the pairs into a stereo model to get a depth profile, and finally fuse all of the profiles together to get a single mesh.
The resulting reconstruction is smoother and more accurate, and recovers more intricate details than other methods for surface reconstruction from Gaussian Splatting, while requiring only a small overhead on top of the fairly short 3DGS optimization process.
We performed extensive testing of the proposed method on in-the-wild scenes, obtained using a smartphone, showcasing its superior reconstruction abilities.
Additionally, we tested the method on the Tanks and Temples and DTU benchmarks, achieving state-of-the-art results.
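The pipeline summarized in the TL;DR (render stereo-aligned pairs, estimate depth with a pre-trained stereo model, fuse the depths) can be sketched as a toy example. Everything below is a stand-in: the renderer, the stereo model, and the fusion step are hypothetical stubs, and the intrinsics are assumed values; the actual method uses a real 3DGS renderer, a pre-trained stereo-matching network, and a proper depth-fusion and meshing stage.

```python
import numpy as np

FOCAL, BASELINE = 500.0, 0.1  # assumed intrinsics: focal (px), baseline (m)

def fake_render_pair(pose_id, h=4, w=4):
    """Stub for rendering a stereo-aligned image pair from the 3DGS model."""
    rng = np.random.default_rng(pose_id)
    left = rng.random((h, w))
    return left, left  # identical images stand in for a rectified pair

def fake_stereo_depth(left, right):
    """Stub for a pre-trained stereo model: predicts disparity, then
    converts it to depth via depth = focal * baseline / disparity."""
    disparity = np.full(left.shape, 25.0)  # pretend constant disparity (px)
    return FOCAL * BASELINE / disparity

def fuse_depths(depth_maps):
    """Stub fusion: average per-view depth profiles (the real method
    fuses them into a single mesh, e.g. via TSDF integration)."""
    return np.mean(np.stack(depth_maps), axis=0)

# One depth profile per training pose, then a single fused result.
depths = [fake_stereo_depth(*fake_render_pair(i)) for i in range(3)]
fused = fuse_depths(depths)
print(fused.shape, float(fused[0, 0]))  # (4, 4) 2.0
```

The point of the sketch is the data flow: per-pose stereo pairs are rendered from the already-optimized Gaussians, so the stereo model contributes learned real-world priors without touching the 3DGS optimization itself.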
Submission Number: 24