VLMs as GeoGuessr Masters: Exceptional Performance, Hidden Biases, and Privacy Risks

ACL ARR 2025 February Submission4002 Authors

15 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: Vision-Language Models (VLMs) have shown remarkable performance across various tasks, particularly in recognizing geographic information from images. However, significant challenges remain, including biases and privacy concerns. To systematically address these issues in the context of geographic information recognition, we introduce a benchmark dataset consisting of 1,200 images paired with detailed geographic metadata. Evaluating four VLMs, we find that while these models can recognize geographic information from images, achieving up to $53.8\%$ accuracy in city prediction, they exhibit significant regional biases. Specifically, performance is substantially higher for economically developed and densely populated regions than for less developed ($-12.5\%$) and sparsely populated ($-17.0\%$) areas. Moreover, the models systematically overpredict certain locations; for instance, they consistently predict Sydney for images taken in Australia. The strong performance of VLMs also raises privacy concerns, particularly for users who share images online without the intent of being identified. The code and dataset are provided in the supplementary materials and will be publicly available upon publication.
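The regional-bias numbers in the abstract amount to comparing city-prediction accuracy across demographic strata. A minimal sketch of that comparison is below; the record format, region tags, and example data are illustrative assumptions, not the paper's actual pipeline, which stratifies by economic development and population density.

```python
from collections import defaultdict

# Hypothetical records: (predicted_city, true_city, region_group) per image.
# The group labels here are placeholders for the paper's strata.
predictions = [
    ("Sydney", "Sydney", "developed"),    # correct
    ("Paris", "Paris", "developed"),      # correct
    ("Nairobi", "Nairobi", "developing"), # correct
    ("Cairo", "Lagos", "developing"),     # wrong
]

def accuracy_by_group(records):
    """Compute city-prediction accuracy separately for each region group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, true, group in records:
        totals[group] += 1
        if pred == true:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

acc = accuracy_by_group(predictions)
# The reported bias is the accuracy gap between strata.
gap = acc["developed"] - acc["developing"]
```

On this toy data the gap is 0.5; the paper reports gaps of 12.5 and 17.0 percentage points for development level and population density, respectively.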
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Geolocation, Fairness, Vision-Language Model
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4002