A Dual-Layered Evaluation of Geopolitical and Cultural Bias in LLMs

Published: 22 Jun 2025, Last Modified: 27 Jun 2025. ACL-SRW 2025 Poster. License: CC BY 4.0
Keywords: bias, cultural, geopolitical disputes, model bias, inference bias, QA, multilingual evaluation, large language models
TL;DR: We present a two-phase evaluation of LLM bias using a multilingual dataset, revealing how query language and training data influence model behavior in factual and disputable contexts.
Abstract: As large language models (LLMs) are increasingly deployed across diverse linguistic and cultural contexts, understanding their behavior in both factual and disputable scenarios is essential, especially when their outputs may shape public opinion or reinforce dominant narratives. In this paper, we define two types of bias in LLMs, model bias (bias stemming from model training) and inference bias (bias induced by the language of the query), and study them through a two-phase evaluation. Phase 1 evaluates LLMs on factual questions with a single verifiable answer, assessing whether models remain consistent across query languages. Phase 2 expands the scope by probing geopolitically sensitive disputes, where responses may reflect culturally embedded or ideologically aligned perspectives. We construct a manually curated QA dataset covering both factual and disputable questions, spanning four languages and multiple question types. The results show that Phase 1 exhibits query language-induced alignment, while Phase 2 reflects an interplay between the model's training context and the query language. This paper offers a structured framework for evaluating LLM behavior across neutral and sensitive topics, providing insights for future LLM deployment and culturally-aware evaluation practices in multilingual contexts. WARNING: this paper covers East Asian issues which may be politically sensitive.
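As a rough illustration of the Phase 1 idea described in the abstract, cross-language consistency on factual questions can be scored by checking whether a model gives the same normalized answer regardless of the query language. The function name, the four language codes, and the toy data below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: Phase 1-style consistency scoring. A model is treated as
# "consistent" on a factual question when its normalized answer is identical
# across every query language. All names and data here are hypothetical.

def consistency_rate(answers_by_question):
    """answers_by_question: list of dicts mapping language code -> answer string."""
    consistent = sum(
        1
        for per_lang in answers_by_question
        # Lowercase and strip whitespace so trivial surface variation
        # does not count as an inconsistency.
        if len({a.strip().lower() for a in per_lang.values()}) == 1
    )
    return consistent / len(answers_by_question)

# Toy example with four query languages, mirroring the paper's multilingual setup.
sample = [
    {"en": "Mount Everest", "ko": "mount everest", "ja": "Mount Everest", "zh": "Mount Everest"},
    {"en": "1945", "ko": "1945", "ja": "1944", "zh": "1945"},
]
print(consistency_rate(sample))  # 0.5: the model agrees across languages on one of two questions
```

A real evaluation would need answer matching far more robust than string normalization (e.g. translation back to a pivot language or semantic matching), which is precisely the kind of detail the paper's curated dataset and protocol address.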
Student Status: pdf
Archival Status: Archival
Paper Length: Long Paper (up to 8 pages of content)
Submission Number: 119