What's Not Said Still Hurts: A Description-Based Evaluation Framework for Measuring Social Bias in LLMs

ACL ARR 2025 May Submission 1252 Authors

16 May 2025 (modified: 03 Jul 2025) · License: CC BY 4.0
Abstract: Large Language Models (LLMs) often exhibit social biases inherited from their training data. Existing benchmarks evaluate bias in a term-based mode, probing direct associations between demographic terms and bias terms; however, LLMs have become increasingly adept at avoiding such overtly biased responses, yielding seemingly low levels of bias. Biases nonetheless persist in subtler, contextually hidden forms that these benchmarks fail to capture. We introduce the Description-based Bias Benchmark (DBB), a novel dataset designed to assess bias at the semantic level, where bias concepts are embedded in naturalistic, subtly framed, real-world scenarios rather than expressed through superficial terms. We analyze six state-of-the-art LLMs and find that, while they reduce bias at the term level, they continue to reinforce biases in these nuanced settings. Data, code, and results are available at \url{https://anonymous.4open.science/r/Hidden-Bias-Benchmark-A84F/}.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: data ethics; model bias/fairness evaluation; ethical considerations in NLP applications
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 1252