Blind Men and the Elephant: Diverse Perspectives of Gender Bias in Stereotype Benchmarks

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission
Abstract: The multifaceted challenge of accurately measuring gender bias in language models is akin to discerning different segments of a broader, unseen entity. This short paper focuses on intrinsic bias measurement and mitigation strategies for language models, building on prior research that demonstrates a lack of correlation between intrinsic and extrinsic approaches. We delve deeper into intrinsic measurements, identifying inconsistencies among them and positing that these metrics may reflect different facets of gender bias. Our methodology combines an analysis of data distribution across benchmarks with a fine-grained gender bias categorization derived from social psychology. Adjusting the distributions of the two datasets significantly improves the alignment of their outcomes. Our findings not only underscore the complexity inherent in gender bias in language models but also point toward more refined techniques for bias detection and reduction.
Paper Type: short
Research Area: Ethics, Bias, and Fairness
Contribution Types: Data analysis
Languages Studied: English