SLAM Based on Camera-2D LiDAR Fusion

Published: 01 Jan 2024, Last Modified: 12 Apr 2025 · ICRA 2024 · CC BY-SA 4.0
Abstract: SLAM systems play a pivotal role in robotic mapping and localization, relying on various sensor technologies to achieve precision. Passive sensors such as RGB cameras offer high-resolution imagery at low cost, yet they cannot directly recover metric 3D positions or camera motion. Active sensors such as LiDAR provide precise depth estimation and accurate 3D maps, but high-resolution multi-beam LiDAR systems are prohibitively expensive for many applications, while affordable 2D single-beam LiDAR senses only a single scan plane and thus offers limited environmental perception. Addressing these limitations, this paper introduces a deep learning framework that enhances SLAM performance through the fusion of camera and 2D LiDAR data. Our approach combines a novel self-supervised network with an economical single-beam LiDAR, aiming to match or surpass the performance of more expensive LiDAR systems. Integrating single-beam LiDAR measurements allows for dynamic correction of the scale ambiguity in depth maps generated by the monocular camera within SLAM. Consequently, the fused system attains the resolution and accuracy benefits of advanced LiDAR systems at the cost of a 2D LiDAR sensor. Through this combination, we demonstrate a SLAM system that maintains high fidelity in mapping and localization while remaining affordable and broadly applicable.
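
The abstract describes using single-beam LiDAR returns to correct the scale of monocular depth maps. The sketch below is not the paper's implementation; it only illustrates one common way such a correction can be done, assuming the LiDAR returns have already been projected into the camera image (the function name `rescale_depth` and all variable names are hypothetical).

```python
import numpy as np

def rescale_depth(pred_depth: np.ndarray,
                  lidar_uv: np.ndarray,
                  lidar_depth: np.ndarray) -> np.ndarray:
    """Align an up-to-scale monocular depth map to metric 2D LiDAR depths.

    pred_depth : (H, W) network depth prediction, scale-ambiguous.
    lidar_uv   : (N, 2) pixel coordinates of LiDAR returns projected into the image.
    lidar_depth: (N,) metric depth of each return along the camera z-axis.
    """
    # Sample the predicted depth at the pixels hit by the LiDAR beam.
    u = np.clip(np.round(lidar_uv[:, 0]).astype(int), 0, pred_depth.shape[1] - 1)
    v = np.clip(np.round(lidar_uv[:, 1]).astype(int), 0, pred_depth.shape[0] - 1)
    sampled = pred_depth[v, u]

    # Ignore degenerate samples (zero or near-zero depths).
    valid = (sampled > 1e-3) & (lidar_depth > 1e-3)

    # Median ratio is robust to outliers from occlusions or moving objects.
    scale = np.median(lidar_depth[valid] / sampled[valid])
    return pred_depth * scale
```

In practice such a scale estimate can be recomputed per frame, which is one reading of the "dynamic adjustment" mentioned above; the paper's actual mechanism may differ.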