From Body to Mind: Analyzing Gender Representation in Spanish Generative Language Models

ACL ARR 2025 May Submission990 Authors

16 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have profoundly transformed Natural Language Processing (NLP), exhibiting remarkable fluency and versatility in text generation. However, these models can inadvertently perpetuate and even amplify societal biases present in their training data. This study presents a comprehensive analysis of gender bias in Spanish generative LLMs, specifically examining the adjectives these models use to describe men and women. Using carefully designed prompts and a Supersenses-based framework that categorizes adjectives into distinct semantic domains, our analysis uncovers significant patterns indicative of cultural stereotypes, consistent with prior findings on masked language models. Additionally, the paper examines how the magnitude of the observed gender bias varies with model size.
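To make the abstract's methodology concrete, the sketch below illustrates one way a prompting-based adjective elicitation for Spanish could be set up. It is an assumption-laden illustration, not the authors' actual pipeline: the prompt templates ("La mujer era muy" / "El hombre era muy"), the sampling settings, and the example checkpoint (PlanTL-GOB-ES/gpt2-base-bne) are all hypothetical choices, and the subsequent Supersense-based categorization of the elicited adjectives is not shown.

```python
# Illustrative sketch only: the prompts, sampling parameters, and model checkpoint
# are assumptions for demonstration, not the setup described in the paper.
from collections import Counter
from transformers import pipeline

# Any Spanish causal LM could be substituted here; this checkpoint is just an example.
generator = pipeline("text-generation", model="PlanTL-GOB-ES/gpt2-base-bne")

PROMPTS = {
    "mujer": "La mujer era muy",    # "The woman was very ..."
    "hombre": "El hombre era muy",  # "The man was very ..."
}

def sample_continuations(prompt, n=50, max_new_tokens=5):
    """Sample short continuations so the next generated word is typically an adjective."""
    outputs = generator(
        prompt,
        num_return_sequences=n,
        do_sample=True,
        max_new_tokens=max_new_tokens,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    words = []
    for o in outputs:
        continuation = o["generated_text"][len(prompt):].strip()
        if continuation:
            # Keep only the first generated word, stripped of trailing punctuation.
            words.append(continuation.split()[0].strip(".,;:!?"))
    return words

# Count elicited descriptors per gendered prompt; these counts would then feed
# a Supersense-style semantic categorization in a full analysis.
counts = {group: Counter(sample_continuations(p)) for group, p in PROMPTS.items()}
for group, counter in counts.items():
    print(group, counter.most_common(10))
```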
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/fairness evaluation, language/cultural bias analysis, prompting, generative models, multilingualism
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Spanish
Keywords: Gender bias, Spanish language models, Bias evaluation
Submission Number: 990