Towards Region-aware Bias Evaluation Metrics

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: When trained on human-generated data, language models are known to learn and amplify societal biases. While previous work has introduced benchmarks for assessing bias in these models, those benchmarks rely on assumptions that may not hold universally. For instance, a bias dimension commonly used by these metrics is that of family–career, but this may not be the most salient gender bias in every region of the world. In this paper, we identify topical differences in gender bias across regions and propose a region-aware, bottom-up approach to bias assessment. Our approach uses gender-aligned topics for a given region and identifies gender bias dimensions, in the form of topic pairs, that are likely to capture societal gender biases. Several of our proposed bias dimensions match human perception of gender bias in these regions as well as the existing dimensions do, and we also identify new dimensions that align with human perception more closely than the existing ones.
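The abstract does not specify the paper's scoring procedure, but a common way to quantify the gender association of a topic pair in embedding space is a WEAT-style effect size (mean differential association normalized by the pooled standard deviation). The sketch below is a minimal, hypothetical illustration of that idea; the function names, the toy vectors, and the use of cosine similarity over word embeddings are all assumptions, not the authors' method.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, male_vecs, female_vecs):
    """Mean similarity of word vector w to male attribute vectors,
    minus its mean similarity to female attribute vectors."""
    return (np.mean([cosine(w, m) for m in male_vecs])
            - np.mean([cosine(w, f) for f in female_vecs]))

def topic_pair_bias(topic_a_vecs, topic_b_vecs, male_vecs, female_vecs):
    """WEAT-style effect size for a candidate bias dimension (topic A vs. topic B):
    positive values mean topic A skews male and topic B skews female."""
    a = [association(w, male_vecs, female_vecs) for w in topic_a_vecs]
    b = [association(w, male_vecs, female_vecs) for w in topic_b_vecs]
    return (np.mean(a) - np.mean(b)) / np.std(a + b)
```

Under this framing, a region-specific topic pair (e.g. two topics surfaced from region-aligned corpora) could be ranked by the magnitude of this score to propose candidate bias dimensions.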
Paper Type: short
Research Area: Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability, Data analysis, Surveys
Languages Studied: English