Abstract: Visual SLAM has, in general, a high computational footprint. Its potential applications, such as augmented reality (AR), virtual reality (VR) and robotics, have hard real-time constraints and limited computational resources. Reducing the cost of visual SLAM systems is hence essential to equip small robots and AR/VR devices with such technology. Feature extraction, description and matching are at the core of feature-based SLAM systems, with a direct impact on their performance. This work presents a thorough experimental analysis of feature detectors, descriptors and matchers for visual SLAM, focusing on their cost and their effect on estimation accuracy. We also run our visual SLAM system on an embedded platform (Odroid-XU4) and show the effect of such limited hardware on the accuracy and cost of the system. Finally, in order to facilitate future research, our evaluation pipeline is made publicly available.
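The feature matching step the abstract refers to can be illustrated with a minimal sketch: brute-force nearest-neighbour matching of ORB-style 256-bit binary descriptors under Hamming distance, filtered with Lowe's ratio test. This is not the paper's pipeline, just a self-contained NumPy illustration; the function names and the 0.8 ratio threshold are illustrative choices.

```python
import numpy as np

def hamming_matrix(a, b):
    """Pairwise Hamming distances between bit-packed binary descriptors.

    a: (N, 32) uint8, b: (M, 32) uint8 -- each row is a 256-bit
    ORB-style descriptor packed into 32 bytes. Returns an (N, M) matrix.
    """
    xor = a[:, None, :] ^ b[None, :, :]          # differing bits, bytewise
    return np.unpackbits(xor, axis=2).sum(axis=2)  # count set bits per pair

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only if the best distance is clearly smaller than the
    second best (Lowe's ratio test). Returns a list of (index_a, index_b)."""
    d = hamming_matrix(desc_a, desc_b)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        if d[i, best] < ratio * d[i, second]:
            matches.append((i, int(best)))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    desc_a = rng.integers(0, 256, (10, 32), dtype=np.uint8)
    # Simulate re-observed features: same descriptors with a few bits flipped.
    desc_b = desc_a ^ np.uint8(1)
    print(ratio_test_match(desc_a, desc_b))
```

The quadratic distance matrix makes the cost concern in the abstract concrete: brute-force matching scales as O(N·M) descriptor comparisons per frame pair, which is why real systems use approximate or grid-restricted search on embedded hardware.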