Keywords: image matching, datasets, UAV localization, benchmark
TL;DR: We release AerialExtreMatch, a 1.5M-pair dataset enabling robust UAV-to-satellite image matching and localization under extreme viewpoint changes.
Abstract: Image matching serves as a core component of UAV localization guided by satellite imagery. However, this task remains highly challenging due to the extreme viewpoint discrepancies between low-altitude UAV images and nadir-view satellite maps. Existing datasets focus primarily on ground-level or high-altitude UAV imagery and offer little coverage of the geometric variations typical of real aerial scenarios. To address these limitations, we introduce \textbf{AerialExtreMatch}, a large-scale, high-fidelity dataset tailored for extreme-view image matching and UAV localization. It consists of approximately 1.5 million synthetic image pairs rendered from high-quality 3D scenes, simulating diverse UAV and satellite viewpoints to enable robust training of image matching models. To support fine-grained evaluation, we construct a hierarchical benchmark with 32 difficulty levels, defined by three geometric criteria: overlap ratio, scale variation, and pitch difference. In addition, we collect a real-world UAV localization dataset with geo-aligned reference maps of varying visual quality. Extensive experiments involving 16 representative detector-based and detector-free methods demonstrate that models trained on AerialExtreMatch achieve substantial performance gains in both image matching and real-world localization under extreme-view conditions. The dataset and code will be released upon acceptance.
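To make the hierarchical benchmark concrete, the three geometric criteria could be binned into a single difficulty index as in the sketch below. The bin counts (4 overlap bins x 4 scale bins x 2 pitch bins = 32 levels) and all thresholds are illustrative assumptions for exposition; the abstract does not specify the paper's actual binning.

```python
from dataclasses import dataclass


@dataclass
class PairGeometry:
    """Geometry of one UAV-satellite image pair (hypothetical fields)."""
    overlap_ratio: float   # fraction of shared scene content, in [0, 1]
    scale_ratio: float     # relative scale between the two views, >= 1
    pitch_diff_deg: float  # pitch-angle difference in degrees


def difficulty_level(g: PairGeometry) -> int:
    """Map a pair's geometry to a discrete level in 0..31.

    Assumed 4 x 4 x 2 binning: lower overlap, larger scale gap,
    and larger pitch difference all increase the level.
    """
    overlap_bin = min(3, int((1.0 - g.overlap_ratio) * 4))  # less overlap -> harder
    scale_bin = min(3, int(g.scale_ratio) - 1)              # larger scale gap -> harder
    pitch_bin = 0 if g.pitch_diff_deg < 45 else 1           # near-nadir vs oblique
    return overlap_bin * 8 + scale_bin * 2 + pitch_bin


# An easy pair (high overlap, same scale, small pitch change) maps to a low
# level; an extreme-view pair maps to a high one.
easy = PairGeometry(overlap_ratio=0.9, scale_ratio=1.0, pitch_diff_deg=10.0)
hard = PairGeometry(overlap_ratio=0.2, scale_ratio=4.0, pitch_diff_deg=70.0)
```

Stratifying evaluation by such an index lets a benchmark report matching accuracy per level rather than as a single average, which is what makes fine-grained comparisons across the 32 levels possible.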
Primary Area: datasets and benchmarks
Submission Number: 14861