Relative Bias: A Comparative Approach for Quantifying Bias in LLMs

Published: 05 Jun 2025, Last Modified: 15 Jul 2025, ICML 2025 Workshop TAIG Poster, CC BY 4.0
Keywords: Large Language Models, Privacy, Security, Alignment, Bias Analysis, Fairness, AI Governance
TL;DR: We detect bias in a language model by comparing its answers to those of a set of baseline models on the same questions, focusing on how much they differ rather than relying on fixed definitions of bias.
Abstract: The growing deployment of large language models (LLMs) has amplified concerns regarding their inherent biases, raising critical questions about their fairness, safety, and societal impact. However, quantifying LLM bias remains a fundamental challenge, complicated by the ambiguity of what "bias" entails. This challenge grows as new models emerge rapidly and gain widespread use, while introducing potential biases that have not been systematically assessed. In this paper, we propose the Relative Bias framework, a method designed to assess how an LLM's behavior deviates from that of other LLMs within a specified target domain. We introduce two complementary evaluation methods: (1) Embedding Transformation analysis, which captures relative bias patterns through sentence representations in the embedding space, and (2) LLM-as-a-Judge, which employs an LLM to evaluate outputs comparatively. Applying our framework to several case studies on bias and alignment, followed by statistical tests for validation, we find strong agreement between the two scoring methods, offering a systematic, scalable, and statistically grounded approach for comparative bias analysis in LLMs.
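To make the embedding-based comparison described in the abstract concrete, below is a minimal illustrative sketch, not the paper's implementation: it scores each of a target model's answers by its cosine distance from the consensus of baseline models' answers to the same questions. The encoder choice (sentence-transformers, all-MiniLM-L6-v2), the centroid-based reference, and the function name are assumptions made for illustration only.

```python
# Hedged sketch (assumed details, not the paper's exact method): measure how far a
# target model's answers deviate from baseline models' answers to the same questions.
import numpy as np
from sentence_transformers import SentenceTransformer


def relative_bias_scores(target_answers, baseline_answers_per_model):
    """target_answers: list[str], one answer per question.
    baseline_answers_per_model: list of answer lists, one list per baseline model.
    Returns one deviation score (cosine distance) per question."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

    def embed(texts):
        vecs = encoder.encode(texts, convert_to_numpy=True)
        return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize

    target = embed(target_answers)                                             # (Q, d)
    baselines = np.stack([embed(ans) for ans in baseline_answers_per_model])   # (M, Q, d)

    # Reference point per question: mean baseline embedding, re-normalized.
    centroid = baselines.mean(axis=0)
    centroid /= np.linalg.norm(centroid, axis=1, keepdims=True)

    # Cosine distance of the target answer from the baseline consensus, per question.
    return 1.0 - np.sum(target * centroid, axis=1)


if __name__ == "__main__":
    target = ["Nurses are usually women.", "Engineers solve problems."]
    baselines = [
        ["Nurses can be of any gender.", "Engineers solve problems."],
        ["Nursing is open to everyone.", "Engineers design and solve problems."],
    ]
    print(relative_bias_scores(target, baselines))
```

Per-question scores like these could then be aggregated or fed into statistical tests, in the spirit of the validation step the abstract mentions; the aggregation choice here is left open.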
Submission Number: 51