Evaluating Precise Geolocation Inference Capabilities of Vision Language Models

Published: 16 Dec 2024, Last Modified: 20 Feb 2025, airrworkshop Oral and Poster, CC BY 4.0
Keywords: Vision Language Models, Image Geolocation, AI Privacy, AI Agents
TL;DR: We present a novel dataset and comprehensive benchmark of precise geolocation capabilities of VLMs and VLM agents.
Abstract: The prevalence of Vision-Language Models (VLMs) raises important questions about privacy in an era where visual information is increasingly available. While foundation VLMs demonstrate broad knowledge and learned capabilities, we specifically investigate their ability to infer geographic location from previously unseen image data. This paper introduces a benchmark dataset collected from Google Street View that reflects its global distribution of coverage. Foundation models are evaluated on single-image geolocation inference, with many achieving median distance errors of $<300$ km. We further evaluate VLM "agents" with access to supplemental tools, observing up to a $38.2\%$ decrease in distance error. Our findings establish that modern foundation VLMs can act as powerful image geolocation tools without being specifically trained for this task. Coupled with the increasing accessibility of these models, these capabilities carry broad implications for online privacy. We discuss these risks, as well as future work in this area. Datasets and code can be found at: https://anonymous.4open.science/r/location-inference-1611
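The abstract reports geolocation performance as distance error in kilometers. As an illustrative sketch only (the paper's exact evaluation code is in the linked repository), the snippet below shows one plausible way such an error could be computed, assuming a great-circle (haversine) distance between predicted and ground-truth coordinates; the coordinate values here are hypothetical examples, not data from the paper.

```python
# Hypothetical sketch: computing median distance error (km) between
# predicted and ground-truth (latitude, longitude) pairs.
import math
import statistics

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example (made-up) predictions and ground-truth locations.
predictions = [(48.85, 2.35), (35.68, 139.69)]
ground_truth = [(51.51, -0.13), (35.02, 135.76)]

errors_km = [haversine_km(p[0], p[1], g[0], g[1])
             for p, g in zip(predictions, ground_truth)]
print(f"Median distance error: {statistics.median(errors_km):.1f} km")
```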
Submission Number: 13
