Social Bias in Large Language Models For Bangla: An Empirical Study on Gender and Religious Bias

ACL ARR 2024 June Submission5829 Authors

16 Jun 2024 (modified: 08 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: The rapid growth of Large Language Models (LLMs) has made the study of bias a crucial field. It is important to assess the influence of different types of biases embedded in LLMs to ensure their fair use in sensitive domains. Although bias assessment has been studied extensively for English, such efforts remain scarce for a major language like Bangla. In this work, we examine two types of social bias in LLM-generated outputs for the Bangla language. Our main contributions are: (1) a study of two different social biases (gender and religious) for Bangla, (2) a curated dataset for bias-measurement benchmarking, and (3) two different probing techniques for bias detection in the context of Bangla. To the best of our knowledge, this is the first such work on bias assessment of LLMs for Bangla. All our code and resources will be made publicly available to support bias-related research in Bangla NLP.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Ethics, Fairness, Bangla NLP, Gender Bias, Religious Bias, Prompting, LLM
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: Bangla
Submission Number: 5829