An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models

ACL ARR 2024 June Submission 5589 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: With the rapid growth of Large Language Models (LLMs), an increasing number of tasks are being automated with them, making it critical to assess their fairness. Studies reveal that LLMs reflect societal norms and biases, creating a risk of propagating societal stereotypes into downstream tasks. Numerous works have examined bias exhibited by LLMs across various NLP applications, most of all gender bias. However, there is a gap in the study of bias in emotional attributes, even though human emotion and gender are closely linked in the societal discourse of almost all societies. The gap is even larger for a low-resource language like Bangla. Historically, women have been associated with emotional responses such as empathy, fear, or guilt, whereas men have been associated with anger, bravado, and authority; this pattern resonates with the social fabric of regions where Bangla is prevalent. In this work, we offer the first thorough investigation of gendered emotion attribution in Bangla for both closed- and open-source LLMs. Our aim is to elucidate the intricate societal relationship between gender and emotion specifically within the context of Bangla. All of our resources, including code and data, will be publicly available to support future research on Bangla NLP.
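The abstract does not fix a concrete probing setup, but a minimal sketch of how gendered emotion attribution could be probed might look as follows. Everything here is an assumption for illustration: `query_llm` is a placeholder for whichever closed- or open-source model is under test, and the emotion labels, Bangla prompt template, and persona names are invented, not the paper's actual data or taxonomy.

```python
from collections import Counter

# Hypothetical stand-in for whichever chat-completion API is under test
# (a closed model behind an API or an open-source model served locally);
# this is NOT the paper's actual querying harness.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug a closed- or open-source LLM in here")

# Illustrative closed set of Bangla emotion labels (English glosses as
# values); assumed here, not the paper's emotion taxonomy.
EMOTIONS = {
    "রাগ": "anger",
    "ভয়": "fear",
    "অপরাধবোধ": "guilt",
    "আনন্দ": "joy",
    "দুঃখ": "sadness",
}

# Bangla prompt template, roughly: "{name} received some unexpected news
# today. What is {name} most likely feeling? Answer with one word from:"
TEMPLATE = (
    "{name} আজ একটি অপ্রত্যাশিত খবর পেয়েছে। "
    "{name} সম্ভবত কী অনুভব করছে? নিচের তালিকা থেকে এক শব্দে উত্তর দিন: "
    + ", ".join(EMOTIONS)
)

# Illustrative gendered personas; a real study would vary names and contexts.
PERSONAS = {"female": ["রিমা", "শারমিন"], "male": ["রাহুল", "কামাল"]}

def attribution_counts(n_samples: int = 50) -> dict[str, Counter]:
    """Query the model repeatedly and tally emotion attributions per gender."""
    counts = {gender: Counter() for gender in PERSONAS}
    for gender, names in PERSONAS.items():
        for name in names:
            prompt = TEMPLATE.format(name=name)
            for _ in range(n_samples):
                answer = query_llm(prompt).strip()
                if answer in EMOTIONS:
                    counts[gender][answer] += 1
    return counts
```

The per-gender counters could then be compared, for instance via proportion differences or a chi-squared test, to quantify how strongly the model's emotion attributions skew by gender.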
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Emotion attribution, Bias, LLM, Prompt tuning, Generative AI, Ethical AI
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Bangla
Submission Number: 5589