A Robust and Real-Time RGB-D SLAM Method with Dynamic Point Recognition and Depth Segmentation Optimization

Published: 01 Jan 2024 · Last Modified: 05 Mar 2025 · PRCV (9) 2024 · License: CC BY-SA 4.0
Abstract: Simultaneous localization and mapping (SLAM), one of the core enabling technologies for intelligent mobile robots, has attracted considerable attention in recent years. However, the applicability of SLAM algorithms in practical scenarios is limited by their strict assumption of a static environment. Although many recent SLAM systems introduce semantic segmentation or object detection to identify dynamic regions, these methods fail to detect regions with unknown semantics and are highly time-consuming. To address these problems, we propose a robust and real-time RGB-D SLAM method with dynamic point recognition and depth segmentation optimization. Specifically, we first devise a self-adaptive feature point tracking scheme based on sparse optical flow, which accelerates feature point tracking and avoids local optima. Then, we design a dynamic feature point recognition model that uses motion information and spatial distribution patterns to distinguish between dynamic and static point clusters. Finally, we exploit a depth segmentation optimization scheme to recover misclassified feature points, which further improves the SLAM performance. Experimental results comparing our method with several state-of-the-art (SOTA) models demonstrate that the proposed method achieves the best performance among geometry-based methods and performs competitively against deep learning-based models, especially in highly dynamic environments.
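As a rough illustration of the tracking front end described in the abstract, the sketch below pairs pyramidal Lucas-Kanade sparse optical flow with a simple median-flow residual test to flag candidate dynamic points. It is a minimal assumption-based example, not the authors' implementation: the function name track_and_filter, the motion_thresh parameter, and the median-flow heuristic are illustrative stand-ins for the paper's self-adaptive tracking scheme and its motion/spatial-distribution recognition model.

```python
import numpy as np
import cv2

def track_and_filter(prev_gray, curr_gray, prev_pts, motion_thresh=2.0):
    """Track sparse feature points between two grayscale frames with
    pyramidal Lucas-Kanade optical flow, then flag points whose residual
    motion (after removing the median flow) exceeds a threshold as
    potentially dynamic. Hypothetical sketch, not the paper's method."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    # Keep only successfully tracked points.
    ok = status.ravel() == 1
    p0 = prev_pts[ok].reshape(-1, 2)
    p1 = curr_pts[ok].reshape(-1, 2)

    # Approximate the camera-induced (background) motion by the median flow;
    # points that deviate strongly from it are labelled dynamic candidates.
    flow = p1 - p0
    median_flow = np.median(flow, axis=0)
    residual = np.linalg.norm(flow - median_flow, axis=1)
    dynamic_mask = residual > motion_thresh

    return p1, dynamic_mask


if __name__ == "__main__":
    # Example seeding with Shi-Tomasi corners on two consecutive frames
    # (frame paths are placeholders).
    prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    prev_pts = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=8)
    tracked, dynamic_mask = track_and_filter(prev_gray, curr_gray,
                                             prev_pts.astype(np.float32))
    print(f"tracked: {len(tracked)}, flagged dynamic: {dynamic_mask.sum()}")
```

In the paper's pipeline, static-classified points would then feed pose estimation, while the depth image would be segmented to recover misclassified points near object boundaries; the median-flow test above is only a stand-in for that fuller geometric reasoning.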