Abstract: This paper presents V2DGS, a novel multi-sensor fusion reconstruction system designed to enhance 2D Gaussian Splatting (2DGS) for outdoor scene reconstruction. Our method integrates LiDAR, camera, and IMU measurements within a SLAM framework to jointly estimate camera poses and construct a surfel-based visual voxel map. Unlike conventional image-only pipelines that rely on Structure-from-Motion (SfM), V2DGS leverages geometric and photometric priors derived from this voxel map, where each surfel encodes position, color, scale, and orientation. These rich priors are used to initialize 2D Gaussian primitives, significantly improving convergence and reconstruction quality. In addition, the SLAM-estimated poses are refined through global bundle adjustment to enhance overall consistency. Experiments on the FAST-LIVO dataset demonstrate that our approach outperforms other Gaussian-based methods, including 2DGS, 3DGS, and SuGaR, in both geometric accuracy and rendering efficiency.
External IDs: dblp:conf/prcv/ZhangSDZL25