AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System

Published: 01 Jan 2025 · Last Modified: 15 May 2025 · IEEE Trans. Robotics 2025 · CC BY-SA 4.0
Abstract: In this article, we present an efficient visual simultaneous localization and mapping (SLAM) system designed to tackle both short-term and long-term illumination challenges. Our system adopts a hybrid approach that combines deep learning techniques for feature detection and matching with traditional back-end optimization methods. Specifically, we propose a unified convolutional neural network that simultaneously extracts keypoints and structural lines. These features are then associated, matched, triangulated, and optimized in a coupled manner. In addition, we introduce a lightweight relocalization pipeline that reuses the built map, where keypoints, lines, and a structure graph are used to match the query frame with the map. To enhance the applicability of the proposed system to real-world robots, we deploy and accelerate the feature detection and matching networks using C++ and NVIDIA TensorRT. Extensive experiments conducted on various datasets demonstrate that our system outperforms other state-of-the-art visual SLAM systems in illumination-challenging environments. Efficiency evaluations show that our system can run at a rate of $73\,\mathrm{Hz}$ on a PC and $40\,\mathrm{Hz}$ on an embedded platform.