Abstract: Despite significant progress in 6-DoF visual localization, research has been driven mostly by ground-level benchmarks. Compared with aerial oblique photography, ground-level map collection lacks scalability and complete coverage. In this work, we propose to go beyond the traditional ground-level setting and exploit cross-view 6-DoF localization from aerial to ground. We address this problem by formulating camera pose estimation as an iterative render-and-compare pipeline and by enhancing robustness through augmenting seeds from noisy initial priors. As no public dataset exists for the studied problem, we have collected a new dataset that provides a variety of cross-view images from smartphones and low-altitude drones, and we have developed a semi-automatic system to acquire ground-truth poses for query images. We benchmark our method as well as several state-of-the-art baselines and demonstrate that our method outperforms the other approaches by a large margin. Code is available at https://github.com/Choyaa/Render2Loc.