Do Not Tell Me How to Feel: Uncovering Gendered Emotional Stereotypes in Language Models

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: Recent research has shown that large language models (LLMs) reflect societal norms and biases. While gender bias in machine translation and other areas has been studied extensively, there is surprisingly little research on gender bias in emotion analysis, even though gender and emotion are inextricably linked in societal discourse and emotion recognition is a focal point of artificial intelligence (AI) regulation (European Commission, 2023). We address this gap by investigating four recent LLMs for their gendered emotional stereotypes and the implicit assumptions that underpin their predictions. We prompt them to predict emotional responses for different genders in English self-reports such as "When I fell in love". All models consistently exhibit gendered stereotypes, associating females with sadness and males with anger, and we consequently also find gender biases when these models are used to predict emotions. Moreover, the models inherently rely on a binary gender framework. Our findings shed light on the complex societal interplay between language, gender, and emotion. That these stereotypes are replicated in LLMs allows us to use those models to study the topic in detail, but it also raises questions about the predictive use of the same LLMs in emotion applications. In short: do we want these models to replicate societal stereotypes around gendered emotion?
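
The abstract describes probing LLMs by prompting them to predict emotional responses for different genders over event self-reports. The sketch below is a minimal, hypothetical illustration of such a prompting setup, not the authors' code: the prompt template, persona and emotion lists, and the `query_model` stub are assumptions introduced purely for illustration and would need to be replaced with the actual study design and LLM API.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# build gendered prompt variants from event self-reports and collect
# an LLM's emotion label for each variant.

from itertools import product

EVENTS = [
    "When I fell in love",          # example event from the abstract
    "When I failed an important exam",  # hypothetical additional event
]

# The paper notes that the models assume a binary gender framework;
# these personas only illustrate how gendered prompt variants could be built.
PERSONAS = ["a woman", "a man", "a person"]

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

PROMPT_TEMPLATE = (
    "Imagine you are {persona}. Choose the single emotion you would most "
    "likely feel, from: {emotions}.\n"
    '"{event}, I felt ..."'
)


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in the API client used."""
    return "sadness"  # placeholder response


def run_probe():
    """Collect one emotion label per (event, persona) prompt variant."""
    results = []
    for event, persona in product(EVENTS, PERSONAS):
        prompt = PROMPT_TEMPLATE.format(
            persona=persona, emotions=", ".join(EMOTIONS), event=event
        )
        label = query_model(prompt).strip().lower()
        results.append({"event": event, "persona": persona, "emotion": label})
    return results


if __name__ == "__main__":
    for row in run_probe():
        print(row)
```

Comparing the distribution of predicted emotions across personas (e.g., sadness for female personas vs. anger for male personas) is the kind of analysis the abstract reports; the stub above only shows the shape of such a probe.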
Paper Type: long
Research Area: Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability, Data analysis, Position papers
Languages Studied: English