A Multilingual Social Bias Benchmark Incorporating Thinking Processes

ACL ARR 2026 January Submission 9001 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Social bias, chain-of-thought, multilingual
Abstract: Large Language Models (LLMs) can learn both useful knowledge and harmful stereotypes, making bias evaluation essential. Existing frameworks fall into two types: those that consider reasoning steps (Thinking Process-Aware Evaluation, TPAE) and those that focus only on final outputs (Straight-to-the-Answer Evaluation, SAE). Prior TPAE studies demonstrated effectiveness in assessing gender bias but relied on template-based, word-counting prompts, limiting generalization to other bias types, languages, and reasoning-based methods. In this study, we introduce MBTP, a multilingual social bias benchmark that incorporates human-generated pro- and anti-stereotype reasoning as part of the thinking process, and we propose a few-shot meta-evaluation method that enables scalable bias assessment without model fine-tuning. In experiments covering 13 social bias categories across 8 languages, we find that human-generated thinking consistently yields higher-quality evaluations than LLM-generated or template-based approaches. Furthermore, TPAE outperforms SAE, highlighting the importance of accounting for reasoning processes in bias evaluation. We will release the MBTP dataset upon paper acceptance.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Social bias, chain-of-thought, multilingual
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English, Japanese, Chinese, French, German, Spanish, Arabic, Russian
Submission Number: 9001