Benchmarking the Robustness of Cross-view Geo-localization Models

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Cross-view Geo-localization, Robustness Benchmark, Robustness Evaluation, Robustness Enhancement
TL;DR: This work presents the first comprehensive investigation into the robustness of cross-view geo-localization models and proposes the first robustness evaluation benchmark for the task.
Abstract: This paper investigates the cross-view geo-localization task, which matches ground-level query images against an aerial image database tagged with GPS coordinates to determine where the ground images were captured. The task is significant across multiple domains, including autonomous driving, robotic navigation, and 3D reconstruction. Despite recent notable performance improvements, existing models lack robustness to real-world environmental variations such as adverse weather conditions and sensor noise, which poses potential risks when the task is integrated into safety-critical applications. To comprehensively evaluate existing methods, this paper introduces the first benchmarks for assessing the robustness of cross-view geo-localization models to real-world image corruptions. We applied 16 corruption types, each at 5 severity levels, to the widely used public datasets CVUSA and CVACT, ultimately generating about 1.5 million corrupted images for studying the robustness of different models. This study reveals the performance degradation of cross-view geo-localization models on corrupted images and provides user-friendly robustness evaluation benchmarks. Additionally, we introduce straightforward and effective robustness enhancement techniques (stylization and histogram equalization) that consistently improve the robustness of various models. The code and benchmarks are available online.
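To make the two components concrete, here is a minimal NumPy sketch of (a) a severity-scaled corruption, using Gaussian noise as a representative of the 16 corruption types, and (b) histogram equalization as a simple robustness-enhancing pre-processing step. The sigma schedule and function names are illustrative assumptions, not the benchmark's actual settings or the authors' code.

```python
import numpy as np

def gaussian_noise(img: np.ndarray, severity: int, seed: int = 0) -> np.ndarray:
    """Corrupt a uint8 image with Gaussian noise at severity 1..5.

    The sigma values below are illustrative; the benchmark's exact
    noise schedule may differ.
    """
    sigma = [8, 16, 24, 32, 48][severity - 1]
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram-equalize a uint8 grayscale image.

    Spreads the intensity distribution over the full 0..255 range,
    a cheap normalization that can reduce sensitivity to lighting
    and contrast shifts.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # The normalized CDF becomes an intensity lookup table.
    lut = np.round((cdf - cdf_min) / max(img.size - cdf_min, 1) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

In a benchmark setting, each corruption function would be applied at every severity level to every query image, and the equalization step would be applied to inputs at both training and test time.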
Supplementary Material: zip
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3535