Keywords: Fairness, Geographical Bias, Model Evaluation
Abstract: The widespread adoption of AI models, especially foundation models (FMs), has had a profound impact on numerous domains. However, it also raises significant ethical concerns, including bias issues.
Although numerous efforts have been made to quantify and mitigate social bias in AI models,
**geographic bias** (in short, geo-bias) has received far less attention, even though it presents unique challenges.
While previous work has explored ways to quantify geo-bias, these measures are either *model-specific* (e.g., mean absolute deviation of LLM ratings) or *spatially implicit* (e.g., average fairness scores of all spatial partitions).
We lack a **model-agnostic, universally applicable, and spatially explicit** geo-bias evaluation framework that allows researchers to fairly compare the geo-bias of different AI models and to understand which spatial factors contribute to it.
In this paper, we establish an **information-theoretic framework for geo-bias evaluation**, called **GeoBS** (**Geo**-**B**ias **S**cores). We demonstrate the generalizability of the proposed framework by showing how existing geo-bias measures can be interpreted and analyzed within it. Then, we propose three novel geo-bias scores that explicitly take intricate spatial factors (multi-scalability, distance decay, and anisotropy) into consideration.
Finally, we conduct extensive experiments on 3 tasks, 8 datasets, and 8 models to demonstrate that both task-specific GeoAI models and general-purpose foundation models may suffer from various types of geo-bias.
This framework will not only advance the technical understanding of geographic bias but also establish a foundation for integrating spatial fairness into the design, deployment, and evaluation of AI systems.
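For intuition, here is a minimal, self-contained Python sketch of the kinds of measures contrasted above: a model-specific mean absolute deviation of per-location ratings, a spatially implicit partition average, and a distance-decay-weighted aggregate of the sort a spatially explicit score must account for. This is our own illustration; the function names, formulas, and weighting scheme are assumptions, not the paper's GeoBS definitions.

```python
import numpy as np

# Illustrative only: toy analogues of prior geo-bias measures.
# None of these are the paper's GeoBS scores.

def mad_of_ratings(ratings):
    """Model-specific measure: mean absolute deviation of per-location
    LLM ratings from their overall mean."""
    ratings = np.asarray(ratings, dtype=float)
    return float(np.mean(np.abs(ratings - ratings.mean())))

def avg_partition_fairness(fairness_scores):
    """Spatially implicit measure: average fairness score over all
    spatial partitions; note it discards *where* unfair partitions are."""
    return float(np.mean(fairness_scores))

def distance_decayed_error(errors, dists, alpha=1.0):
    """Toy spatially explicit aggregation: weight per-location errors by
    an inverse distance-decay kernel w_i = 1 / (1 + d_i)**alpha from a
    reference point, so nearby disparities count more than distant ones."""
    errors = np.asarray(errors, dtype=float)
    w = 1.0 / (1.0 + np.asarray(dists, dtype=float)) ** alpha
    return float(np.sum(w * errors) / np.sum(w))

# Toy usage: ratings for five regions, fairness for three partitions,
# and per-location errors with distances from a reference point.
print(mad_of_ratings([4.2, 3.9, 2.1, 4.5, 3.0]))
print(avg_partition_fairness([0.92, 0.61, 0.88]))
print(distance_decayed_error([0.1, 0.4, 0.2], dists=[0.0, 5.0, 20.0]))
```

The first two functions show why a single aggregate can hide geo-bias: the partition average reports the same value no matter how unfair partitions are arranged in space, which is exactly the gap a spatially explicit score is meant to close.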
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15119