Conquering Textureless with RF-referenced Monocular Vision for MAV State Estimation

Published: 01 Jan 2021 · Last Modified: 17 May 2023 · ICRA 2021
Abstract: The versatile nature of agile micro aerial vehicles (MAVs) poses fundamental challenges to the design of robust state estimation in complex environments. Achieving high-quality performance in textureless scenes is one of the missing pieces of the puzzle. Previously proposed solutions either seek a remedy through visual loop closure or leverage RF localizability at inferior accuracy; none of them supports accurate MAV state estimation in textureless scenes. This paper presents RFSift, a new state estimator that conquers the textureless challenge with RF-referenced monocular vision, achieving centimeter-level accuracy in textureless scenes. Our key observation is that RF and visual measurements are coupled through pose constraints: mapping RF measurements to feature quality and sifting out well-matched features significantly improves accuracy. RFSift consists of 1) an RF-sifting algorithm that maps 3D UWB measurements to 2D visual features to select the best features; and 2) an RF-visual-inertial sensor fusion algorithm that enables robust state estimation by leveraging multiple sensors with complementary advantages. We implement the prototype with off-the-shelf products and conduct large-scale experiments. The results demonstrate that RFSift is robust in textureless scenes and 10x more accurate than the state-of-the-art monocular vision system. The code of RFSift is available at https://github.com/weisgroup/RFSift.
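To make the RF-sifting idea concrete, the sketch below illustrates one plausible reading of the abstract: UWB ranges to known anchors give a coarse camera position, and that RF-referenced position is used to score 2D feature matches so that only the best-matched ones are kept. The anchor layout, the linearized trilateration step, the reprojection-error scoring rule, and the function names (`trilaterate`, `sift_features`, `keep_ratio`) are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of RF-referenced feature sifting, assuming known UWB anchor
# positions and a coarse rotation from the IMU/VIO front end. Not the paper's
# implementation; an illustration of the idea stated in the abstract.
import numpy as np

def trilaterate(anchors, ranges):
    """Coarse camera position from UWB ranges via linearized least squares.

    anchors: (N, 3) known anchor positions; ranges: (N,) measured distances.
    """
    # Subtract the first anchor's equation to linearize ||p - a_i||^2 = r_i^2.
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def sift_features(landmarks, pixels, K, R, p_rf, keep_ratio=0.5):
    """Score each feature by reprojection consistency against the RF-derived
    position prior p_rf and keep the best-matched fraction.

    landmarks: (M, 3) triangulated points; pixels: (M, 2) observed 2D features;
    K: 3x3 intrinsics; R: 3x3 world-to-camera rotation (assumed given).
    """
    pts_cam = (R @ (landmarks - p_rf).T).T          # landmarks in camera frame
    proj = (K @ pts_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]               # perspective division
    err = np.linalg.norm(proj - pixels, axis=1)     # reprojection error (pixels)
    order = np.argsort(err)                         # smallest error = best match
    keep = order[: max(1, int(keep_ratio * len(err)))]
    return keep, err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical non-coplanar UWB anchors and a true camera position.
    anchors = np.array([[0, 0, 3], [10, 0, 2.5], [0, 10, 3], [10, 10, 0.5]], float)
    p_true = np.array([4.0, 5.0, 1.5])
    ranges = np.linalg.norm(anchors - p_true, axis=1) + rng.normal(0, 0.05, 4)
    p_rf = trilaterate(anchors, ranges)

    K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])
    R = np.eye(3)
    landmarks = p_true + rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])
    cam_pts = (R @ (landmarks - p_true).T).T
    pixels = (K @ cam_pts.T).T
    pixels = pixels[:, :2] / pixels[:, 2:3]
    pixels[::4] += rng.normal(0, 15.0, (5, 2))      # corrupt some matches

    keep, err = sift_features(landmarks, pixels, K, R, p_rf)
    print("kept features:", keep)
```

In this toy setup the corrupted matches produce large reprojection errors against the RF-referenced position and are dropped, which mirrors the abstract's claim that sifting well-matched features improves accuracy.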