Classification Drives Geographic Bias in Street Scene Segmentation

Published: 2025, Last Modified: 12 Nov 2025 · CVPR Workshops 2025 · CC BY-SA 4.0
Abstract: Previous studies have shown that image datasets lacking geographic diversity can lead to biased performance in models trained on them. Most prior work studied geo-bias in general-purpose image datasets (e.g., ImageNet, OpenImages) using simple tasks like image classification. Recent works have studied geo-biases in application-specific image datasets such as driving datasets, but they focused only on coarse-grained localization tasks like 2D or 3D detection. In this work, we investigate geo-biases in a Eurocentric driving dataset (Cityscapes) on the fine-grained localization task of instance segmentation. Consistent with previous work, we find that instance segmentation models trained on European driving scenes (Eurocentric models) are geo-biased. Interestingly, we find that geo-biases stem from classification errors rather than localization errors, with classification errors alone contributing 10-90% of the geo-biases in segmentation and 19-88% of the geo-biases in detection. Our findings suggest that users who want to apply region-specific models (e.g., Eurocentric models) globally may prefer to coarsen label categories (e.g., use a common label like 4-wheelers instead of separate labels like car, bus, and truck). Coarser labels can reduce classification errors, which, as we show in this work, are a major contributor to geo-bias.
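The label-coarsening mitigation suggested above can be sketched as a simple remapping applied to model outputs before evaluation. This is an illustrative sketch, not the paper's implementation; the mapping table and class names below (e.g., grouping car, bus, and truck under "4-wheeler") are assumptions based on the example given in the abstract, not the Cityscapes label definitions.

```python
# Hypothetical coarsening map: fine-grained vehicle classes are merged
# into a single "4-wheeler" category; unmapped labels pass through.
COARSE_LABELS = {
    "car": "4-wheeler",
    "bus": "4-wheeler",
    "truck": "4-wheeler",
}

def coarsen(predictions):
    """Map each predicted fine-grained label to its coarse category,
    leaving labels without an entry unchanged."""
    return [COARSE_LABELS.get(label, label) for label in predictions]

print(coarsen(["car", "bus", "truck", "person"]))
# -> ['4-wheeler', '4-wheeler', '4-wheeler', 'person']
```

Applying the same map to both predictions and ground-truth labels turns a fine-grained misclassification (e.g., predicting "car" for a "truck") into a correct coarse prediction, which is how coarser labels absorb the classification errors identified as the main source of geo-bias.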