LICFM3-SLAM: LiDAR-Inertial-Camera Fusion and Multimodal Multilevel Matching for Bionic Quadruped Inspection Robot Mapping
Abstract: Compared with wheeled robots, bionic quadruped robots exhibit far more vigorous locomotion. Their mapping systems must therefore maintain satisfactory robustness and accuracy in complex real-world scenarios, even when the robot's body shakes intensely. To address these challenges, this study proposes a simultaneous localization and mapping (SLAM) system based on LiDAR-inertial-camera fusion and a multimodal multilevel matching algorithm (LICFM3-SLAM). First, a tightly coupled strategy fuses LiDAR, inertial, and camera information, and a visual-inertial odometry (VIO) subsystem based on adaptive graph inference is introduced, achieving high-precision and robust robot state estimation. Second, inspired by human spatial cognition, a multimodal multilevel matching algorithm is proposed that exploits observations from both the camera and the LiDAR, thereby achieving accurate and robust data association. Finally, incremental poses are optimized with factor graph optimization, and a globally consistent 3-D point cloud map is constructed. The proposed system is tested on a public benchmark dataset and applied to a bionic quadruped inspection robot (BQIR), with experiments conducted in various challenging indoor and outdoor large-scale scenarios. The results reveal that LICFM3-SLAM exhibits high robustness and mapping accuracy while meeting real-time requirements.
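As a rough illustration of the back-end step summarized above, the following sketch shows how incremental poses could be chained into a factor graph and optimized for global consistency. It assumes a GTSAM-style pose graph with hypothetical odometry and loop-closure measurements and placeholder noise values; it is not the authors' actual implementation.

```cpp
// Minimal pose-graph sketch (GTSAM): a prior on the first pose, odometry
// (between) factors from an incremental front end, and one loop-closure
// factor. All measurements and noise values are illustrative placeholders.
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using namespace gtsam;

int main() {
  NonlinearFactorGraph graph;
  Values initial;

  // Anchor the first pose with a tight prior.
  auto priorNoise = noiseModel::Diagonal::Sigmas(
      (Vector(6) << 1e-4, 1e-4, 1e-4, 1e-4, 1e-4, 1e-4).finished());
  graph.add(PriorFactor<Pose3>(Symbol('x', 0), Pose3(), priorNoise));
  initial.insert(Symbol('x', 0), Pose3());

  // Incremental odometry factors (e.g., from a LiDAR/VIO front end);
  // here each step is a nominal 1 m advance along x.
  auto odomNoise = noiseModel::Diagonal::Sigmas(
      (Vector(6) << 0.05, 0.05, 0.05, 0.1, 0.1, 0.1).finished());
  for (int i = 0; i < 3; ++i) {
    Pose3 delta(Rot3(), Point3(1.0, 0.0, 0.0));
    graph.add(BetweenFactor<Pose3>(Symbol('x', i), Symbol('x', i + 1),
                                   delta, odomNoise));
    // Initial guesses are deliberately perturbed so the optimizer has work to do.
    initial.insert(Symbol('x', i + 1),
                   Pose3(Rot3(), Point3(1.0 * (i + 1) + 0.1, 0.05, 0.0)));
  }

  // A hypothetical loop-closure constraint relating pose 3 back to pose 0.
  auto loopNoise = noiseModel::Diagonal::Sigmas(
      (Vector(6) << 0.02, 0.02, 0.02, 0.05, 0.05, 0.05).finished());
  graph.add(BetweenFactor<Pose3>(Symbol('x', 3), Symbol('x', 0),
                                 Pose3(Rot3(), Point3(-3.0, 0.0, 0.0)),
                                 loopNoise));

  // Optimize the graph; the result is a globally consistent set of poses
  // that can be used to assemble the final 3-D point cloud map.
  Values result = LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("Optimized poses:\n");
  return 0;
}
```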