B-score: Detecting biases in large language models using response history

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: LLMs become substantially less biased when allowed to view their own response history; we propose B-score, a bias-indicator score based on response history.
Abstract: Large language models (LLMs) often exhibit strong biases, e.g., against women or in favor of the number 7. We investigate whether LLMs can output less biased answers when allowed to observe their prior answers to the same question in a multi-turn conversation. To understand which types of questions invite more biased answers, we test LLMs on our proposed set of questions that span 9 topics and belong to three types: (1) Subjective; (2) Random; and (3) Objective. Interestingly, LLMs are able to "de-bias" themselves in a multi-turn conversation in response to questions that seek a Random, unbiased answer. Furthermore, we propose B-score, a novel metric that is effective in detecting biases in Subjective, Random, Easy, and Hard questions. On MMLU, HLE, and CSQA, leveraging B-score substantially improves the verification accuracy of LLM answers (i.e., accepting LLM correct answers and rejecting incorrect ones) compared to using verbalized confidence scores or the frequency of single-turn answers alone. Code and data are available at: b-score.github.io.
Lay Summary: State-of-the-art AIs have been shown to be biased against a gender (female) or a race (African), or biased toward a number (7 or 42) or even particular names. This phenomenon has severe consequences in downstream applications. We discover that an AI can actually reduce its own bias when allowed to observe its previous answers to the same question, much as a human might realize they are being unfair after reviewing their past decisions. Based on the difference in how AIs answer with and without observing their response history, we propose B-score, a metric that measures how biased an AI's answer may be (for or against a choice). For example, B-score can detect when a model heavily prefers the number 7 or "Biden" despite being asked to choose a random number or a random name between Biden and Trump. Letting AIs observe their response history has multiple effects: (1) it reduces bias in questions asking for a random choice (e.g., a number, or a name between Biden and Trump); (2) it lets AIs think twice on hard questions where they cannot easily generate a correct answer; and (3) it reveals a model's real subjective opinion on questions that seek subjective preferences.
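The sketch below illustrates the idea described above in a minimal form: compare how often a model gives a particular answer across independent single-turn samples versus across turns of one conversation where it sees its own prior answers, and take the difference as a bias indicator. The function names, the toy data, and the exact formula are illustrative assumptions for this summary; the precise definition of B-score is given in the paper and the released code at b-score.github.io.

```python
from collections import Counter

def answer_frequency(answers, choice):
    """Fraction of sampled answers equal to `choice`."""
    counts = Counter(answers)
    return counts[choice] / len(answers) if answers else 0.0

def b_score(single_turn_answers, multi_turn_answers, choice):
    """Illustrative bias indicator (assumed form, not the paper's exact definition):
    how much more often the model picks `choice` in independent single-turn samples
    than in a multi-turn conversation where it can see its own previous answers.

    A large positive value suggests a bias for `choice`; a large negative value
    suggests a bias against it; values near zero suggest the single-turn behavior
    already matches the history-aware, "de-biased" behavior.
    """
    return (answer_frequency(single_turn_answers, choice)
            - answer_frequency(multi_turn_answers, choice))

# Toy example: the model is asked 30 times to pick a "random" digit.
single_turn = ["7"] * 24 + ["3", "1", "9", "2", "5", "8"]             # heavy preference for 7
multi_turn = ["7", "3", "1", "9", "2", "5", "8", "0", "4", "6"] * 3   # near-uniform with history

print(b_score(single_turn, multi_turn, "7"))  # 0.8 - 0.1 = 0.7 -> strong bias toward 7
```

In the same spirit, the verification use case mentioned in the abstract could threshold such a score to accept or reject a model's answer, but the thresholds and protocol used on MMLU, HLE, and CSQA are those reported in the paper, not shown here.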
Link To Code: https://b-score.github.io
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Large language models; bias; frequency; de-biasing
Submission Number: 6690