Women Are Beautiful, Men Are Leaders: Gender Stereotypes in Machine Translation and Language Modeling
Abstract: We present GEST -- a new dataset for measuring gender-stereotypical reasoning in language models and machine translation systems. GEST contains samples for 16 gender stereotypes about men and women (e.g., Women are beautiful, Men are leaders) that are compatible with English and 9 Slavic languages. The definitions of these stereotypes were informed by gender experts. We used GEST to evaluate English and Slavic masked LMs, English generative LMs, and machine translation systems. We discovered significant and consistent amounts of gender-stereotypical reasoning in almost all the evaluated models and languages. Our experiments confirm the previously postulated hypothesis that the larger the model, the more biased it usually is.
Paper Type: long
Research Area: Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English, Belarusian, Russian, Ukrainian, Croatian, Serbian, Slovene, Czech, Polish, Slovak