3D Point Cloud-based Scene Understanding for Dynamic Large Scale Environment

Published: 2023 · Last Modified: 13 Nov 2024 · RCAR 2023 · CC BY-SA 4.0
Abstract: Most simultaneous localization and mapping (SLAM) techniques assume static environments, disregarding the influence of moving objects on SLAM results and on loop-closure detection (LCD). Moving object tracking (MOT) is crucial for planning and decision-making but is often performed separately. Both SLAM and MOT are essential components of scene understanding. To address these issues, this study presents an approach that integrates LiDAR and IMU data to tackle SLAM and MOT simultaneously, enabling comprehensive scene understanding in complex urban driving scenarios. The proposed method directly utilizes the static point cloud for precise odometry estimation. Experimental evaluations on the KITTI dataset demonstrate the method's superiority over state-of-the-art SLAM techniques in terms of trajectory and mapping accuracy. Additionally, the proposed method effectively detects and tracks moving objects in dynamic large-scale environments, thereby enhancing scene understanding capabilities.
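The core idea stated in the abstract, that odometry is estimated only from the static portion of each scan while moving objects are handled by a tracker, can be sketched as a simple point-cloud split. This is a minimal illustration, not the authors' implementation: the function name and the source of the per-point dynamic labels (e.g. output of a detection/tracking front end) are assumptions.

```python
import numpy as np

def split_static_dynamic(points, dynamic_mask):
    """Split a LiDAR scan into static and dynamic subsets.

    points: (N, 3) array of x, y, z coordinates.
    dynamic_mask: (N,) boolean array, True where a point belongs
    to a detected moving object (hypothetically produced by a
    MOT front end, as in the paper's pipeline).
    """
    static = points[~dynamic_mask]
    dynamic = points[dynamic_mask]
    return static, dynamic

# Toy scan of 5 points; indices 1 and 3 are flagged as moving.
scan = np.array([[0.0, 0.0, 0.0],
                 [1.0, 2.0, 0.5],
                 [3.0, 1.0, 0.2],
                 [4.0, 4.0, 1.0],
                 [5.0, 0.0, 0.3]])
mask = np.array([False, True, False, True, False])

static_pts, dynamic_pts = split_static_dynamic(scan, mask)
# static_pts would feed odometry/mapping; dynamic_pts would feed the tracker.
```

In a full system such a split would run per scan before scan matching, so that dynamic points never corrupt the registration residuals or the accumulated map.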