Evaluating Gender Bias in the Translation of Gender-Neutral Educational Professions from English to Gendered Languages

ACL ARR 2024 August Submission161 Authors

14 Aug 2024 (modified: 18 Sept 2024) · CC BY 4.0
Abstract:

This study evaluates the translation of gender-neutral English words into the gendered languages German, French, and Italian using five machine translation (MT) models: GPT-3.5 Turbo, LLaMA 2, AWS, SYS, and Google. Focusing on the translation of educational professions, we categorized each model's output into one of four gender classifications: unknown (UNK), female (f), male (m), and neutral (n). Error rates were determined through human validation, i.e., manual review of randomly sampled records. Our findings reveal significant gender bias across all tested MT systems, with a notable overrepresentation of male gender classifications.
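The evaluation protocol described above amounts to tagging each translated profession with one of four gender classes and comparing their shares per system. A minimal sketch of that tallying step is shown below; the labels here are invented placeholders, not the paper's data, and `gender_distribution` is a hypothetical helper name.

```python
from collections import Counter

# Hypothetical example: one gender label per translated record.
# Classes follow the paper: "UNK" (unknown), "f" (female),
# "m" (male), "n" (neutral).
labels = ["m", "m", "f", "m", "n", "UNK", "m", "f", "m", "m"]

def gender_distribution(labels):
    """Return the share of each gender class among the labeled records."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: counts.get(cls, 0) / total for cls in ("UNK", "f", "m", "n")}

dist = gender_distribution(labels)
# A male share well above the female share (here 0.6 vs. 0.2) is the
# kind of imbalance the paper reports as male overrepresentation.
```

In the study itself, these shares would be computed separately per MT system and target language, with error rates of the automatic labeling estimated by manually reviewing a random sample of records.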

Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Gender bias, Educational professions
Contribution Types: Model analysis & interpretability
Languages Studied: English, German, French, Italian
Submission Number: 161