Beyond Performance: Quantifying and Mitigating Label Bias in LLMs

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: A comprehensive investigation into the evaluation of label bias in LLMs, and a novel calibration method for mitigating it.
Abstract: Large language models (LLMs) have demonstrated impressive adaptability to diverse tasks, relying only on context prompts containing instructions or minimal input-output examples. However, recent work has revealed that they also exhibit *label bias*---an undesirable preference toward predicting certain answers over others. Still, detecting and measuring this bias reliably and at scale has remained relatively unexplored. In this study, we evaluate different approaches to quantifying label bias in a model's predictions, conducting a comprehensive investigation across 279 classification tasks and ten LLMs. Our investigation reveals substantial label bias in models both before and after debiasing attempts, and highlights the importance of outcome-based evaluation metrics, which have not previously been used for this purpose. We further propose a novel label bias calibration method tailored for few-shot prompting, which outperforms recent calibration approaches in both improving performance and mitigating label bias. Nevertheless, our results emphasize that label bias in the predictions of LLMs remains a barrier to their reliability.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English
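
As a rough illustration of the abstract's terminology (not the submission's own metric or method), label bias in a classification prompt is often quantified by comparing the model's average predicted label distribution against the labels' true frequencies, and "recent calibration approaches" of the kind such methods are compared against typically rescale predictions using a content-free input. The sketch below is a minimal, hypothetical version of both ideas; the function names, array shapes, and numbers are assumptions for illustration only.

```python
import numpy as np

def label_bias_tvd(probs: np.ndarray, label_freqs: np.ndarray) -> float:
    """Total-variation distance between the model's mean predicted label
    distribution and the true label frequencies (0 = unbiased, 1 = maximal).
    `probs` has shape (n_examples, n_labels); rows are per-example label
    probabilities. Hypothetical inputs, not the paper's metric."""
    mean_pred = probs.mean(axis=0)
    return 0.5 * np.abs(mean_pred - label_freqs).sum()

def contextual_calibration(probs: np.ndarray, content_free_probs: np.ndarray) -> np.ndarray:
    """Rescale each prediction by the probabilities the model assigns to a
    content-free input (e.g. "N/A"), a common calibration baseline for
    few-shot prompting, then renormalize each row."""
    calibrated = probs / content_free_probs
    return calibrated / calibrated.sum(axis=1, keepdims=True)

# Toy usage with made-up numbers for a binary task.
probs = np.array([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]])
print(label_bias_tvd(probs, label_freqs=np.array([0.5, 0.5])))   # ~0.2
print(contextual_calibration(probs, np.array([0.65, 0.35])))
```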