Versatile 3D Multi-Sensor Fusion for Lightweight 2D Localization

IROS 2020 (modified: 16 Nov 2022)
Abstract: Aiming for a lightweight and robust localization solution for low-cost, low-power autonomous robot platforms, such as educational or industrial ground vehicles, under challenging conditions (e.g., poor sensor calibration, low lighting, and dynamic objects), we propose a two-stage localization system that incorporates both offline prior map building and online multi-modal localization. In particular, we develop an occupancy grid mapping system with probabilistic odometry fusion, accurate scan-to-submap covariance modeling, and accelerated loop-closure detection, further aided by 2D line features that exploit the environment's structural constraints. We then develop a versatile EKF-based online localization system which optimally (up to linearization) fuses multi-modal information provided by the pre-built occupancy grid map, IMU, odometry, and 2D LiDAR measurements with low computational requirements. Importantly, the spatiotemporal calibration between these sensors is also estimated online to account for poor initial calibration and make the system more "plug-and-play", which improves both the accuracy and flexibility of the proposed multi-sensor fusion framework. In our experiments, our mapping system is shown to be more accurate than the state-of-the-art Google Cartographer. Extensive Monte-Carlo simulations then verify the accuracy, consistency, and efficiency of the proposed map-based localization system with full spatiotemporal calibration. We also validate the complete system (prior map building and online localization) on building-scale real-world datasets.
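The core idea of the abstract's online stage, an EKF whose state is augmented with sensor calibration parameters so they are refined alongside the pose, can be illustrated with a minimal 2D sketch. All names, the unicycle motion model, and the scan-to-map position measurement model below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class MapEKF:
    """Illustrative sketch only: robot pose [x, y, yaw] augmented with the
    LiDAR extrinsic translation [ex, ey], so calibration is refined online
    alongside localization (the paper also estimates temporal offsets)."""

    def __init__(self, pose0, extr0, P0):
        self.x = np.hstack([pose0, extr0]).astype(float)  # [x, y, yaw, ex, ey]
        self.P = P0.copy()

    def predict(self, v, w, dt, Q):
        # Unicycle odometry propagation; extrinsics follow a random walk.
        yaw = self.x[2]
        self.x[0] += v * dt * np.cos(yaw)
        self.x[1] += v * dt * np.sin(yaw)
        self.x[2] = wrap(yaw + w * dt)
        F = np.eye(5)
        F[0, 2] = -v * dt * np.sin(yaw)
        F[1, 2] =  v * dt * np.cos(yaw)
        self.P = F @ self.P @ F.T + Q

    def update_map_position(self, z, R_meas):
        # Assumed measurement: scan-to-map matching against the prior
        # occupancy grid yields the LiDAR position in the map frame,
        #   h(x) = p + R(yaw) @ e
        x, y, yaw, ex, ey = self.x
        c, s = np.cos(yaw), np.sin(yaw)
        h = np.array([x + c * ex - s * ey,
                      y + s * ex + c * ey])
        # Jacobian of h w.r.t. [x, y, yaw, ex, ey]
        H = np.array([[1.0, 0.0, -s * ex - c * ey,  c, -s],
                      [0.0, 1.0,  c * ex - s * ey,  s,  c]])
        S = H @ self.P @ H.T + R_meas
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z - h)
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(5) - K @ H) @ self.P
```

Because the extrinsic appears in the measurement Jacobian, each map-based update corrects both the pose and the calibration estimate, which is the mechanism that makes a poorly calibrated system converge toward "plug-and-play" operation.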
