Which Cultural Lens Do Models Adopt? Unmasking Cultural Positioning Bias in Large Language Model-Generated Interview Scripts
Abstract: Advancements in large language models (LLMs) have enabled a variety of downstream applications, such as story and interview script generation.
However, recent research has raised concerns about culture-related fairness issues in LLM-generated content.
We investigate bias in LLMs' cultural positioning, i.e., their default tendency to align with the viewpoint of mainstream cultures, in particular US culture, in their generations.
To this end, we propose the **CultureLens** benchmark for assessing cultural bias in LLMs through the lens of **culturally situated interview script generation**.
**CultureLens** consists of 4,000 generation prompts that position an LLM as an on-site reporter interviewing local people across ten diverse cultures.
We examine cultural alignment in model outputs using an LLM judge, which detects whether the interviewer’s voice in the transcript reads as "external", i.e., as an "outsider", to the interviewee’s culture.
To quantify the extent of cultural positioning bias, we propose a test suite with three metrics that measure the deviation in externality levels across cultures.
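As an illustration of what such a deviation measure might look like (a minimal sketch under our own assumptions, not the paper's released code), the snippet below computes per-culture externality rates from LLM-judge labels and a simple max-gap score; the function names and the max-gap metric itself are hypothetical:

```python
from collections import defaultdict

def externality_rates(judgments):
    """Fraction of scripts judged 'outsider' per culture.

    judgments: iterable of (culture, is_outsider) pairs from the LLM judge.
    Returns a dict mapping each culture to its externality rate in [0, 1].
    """
    counts, outsider = defaultdict(int), defaultdict(int)
    for culture, is_outsider in judgments:
        counts[culture] += 1
        outsider[culture] += int(is_outsider)
    return {c: outsider[c] / counts[c] for c in counts}

def max_externality_gap(rates):
    """One possible deviation metric (hypothetical): the gap between the
    most and least 'outsider'-framed cultures; 0 means perfectly uniform."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy judge outputs for illustration only (not real benchmark data).
judgments = [("United States", False), ("United States", False),
             ("Papua New Guinea", True), ("Papua New Guinea", True)]
rates = externality_rates(judgments)
print(rates, max_externality_gap(rates))
```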
Evaluation of four state-of-the-art LLMs reveals systematic biases: all models demonstrate an overwhelming tendency (above 90% on average) to take an insider tone for the United States, while being prone to speak as an "outsider" for non-mainstream cultures such as Papua New Guinea.
To mitigate the observed biases, we propose Fairness Intervention Pillars (FIP), a pipeline that reduces bias by conditioning model generations on task-specific, fine-grained fairness pillars.
Empirical results show a marked improvement in positioning fairness between dominant and non-mainstream cultures.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Bias, Culture, LLM, Generation
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Submission Number: 6686