Abstract: In this study, we explore potential biases in large language models (LLMs) from a novel perspective: we detect racial bias in texts these models generate when describing the physical environments of diverse racial communities and the narratives of their inhabitants. Our study reveals statistically significant infrastructure biases in popular LLMs, including ChatGPT, Gemma, and Llama 3, suggesting potential racial biases linked to built-environment features.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Ethics, Bias, and Fairness, Interpretability and Analysis of Models for NLP, Sentiment Analysis
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 5354