Toward Accurate, Efficient, and Robust RGB-D Simultaneous Localization and Mapping in Challenging Environments

Published: 2025 · Last Modified: 26 Jan 2026 · IEEE Trans. Robotics 2025 · CC BY-SA 4.0
Abstract: Visual simultaneous localization and mapping (SLAM) is crucial to many applications such as self-driving vehicles and robotic tasks. However, it is still challenging for existing visual SLAM approaches to achieve good performance in low-texture or illumination-changing scenes. In recent years, some researchers have turned to edge-based SLAM approaches to handle such challenging scenes, as they are more robust than feature-based and direct SLAM methods. Nevertheless, existing edge-based methods are computationally expensive and inferior to other visual SLAM systems in terms of accuracy. In this study, we propose EdgeSLAM, a novel RGB-D edge-based SLAM approach for challenging scenarios that is efficient, accurate, and robust. EdgeSLAM is built on two innovative modules: efficient edge selection and adaptive robust motion estimation. The edge selection module efficiently selects a small set of edge pixels, which significantly improves computational efficiency without sacrificing accuracy. The motion estimation module improves the system's accuracy and robustness by adaptively handling outliers during motion estimation. Extensive experiments were conducted on the Technical University of Munich (TUM) RGB-D, Imperial College London (ICL)-National University of Ireland Maynooth (NUIM), and ETH Zurich 3D reconstruction (ETH3D) datasets. The results show that EdgeSLAM significantly outperforms five state-of-the-art methods in terms of efficiency, accuracy, and robustness, achieving a 29.17% accuracy improvement, a high processing speed of up to 120 frames/s, and a high positioning success rate of 97.06%.
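The abstract does not detail how outliers are handled adaptively, and the paper's actual formulation may differ. As a generic illustration only, robust motion estimation is often posed as iteratively reweighted least squares (IRLS), where residuals larger than a threshold are downweighted by a robust kernel (here a Huber weight; the 1-D translation setup and `delta` value are illustrative assumptions, not the paper's method):

```python
import numpy as np

def huber_weights(residuals, delta=0.5):
    """Huber weights: residuals with |r| <= delta get weight 1;
    larger residuals (likely outliers) get weight delta/|r|."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w

# Toy 1-D example: estimate a translation t mapping points p to q,
# where the true shift is 0.5 and one correspondence is a gross outlier.
p = np.array([0.0, 1.0, 2.0, 3.0])
q = p + 0.5
q[3] += 10.0  # gross outlier

t = 0.0
for _ in range(20):  # IRLS: re-estimate weights, then re-solve for t
    r = q - (p + t)
    w = huber_weights(r, delta=0.5)
    t = np.sum(w * (q - p)) / np.sum(w)

print(round(t, 3))  # robust estimate ~0.667 vs. unweighted mean 3.0
```

The unweighted least-squares estimate (the plain mean of `q - p`) is 3.0, dragged far from the true shift of 0.5 by the single outlier; the reweighted estimate converges near 0.667 because the outlier's influence is capped. Redescending kernels (e.g., Tukey) would suppress the outlier even more strongly.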