Keywords: fairness, bias transfer hypothesis, prompt adaptation, large language models, coreference resolution
TL;DR: This study reveals that coreference resolution biases persist in causal language models despite prompt adaptation, emphasizing the importance of ensuring base fairness in pre-trained LLMs used for downstream prompting tasks.
Abstract: Large language models (LLMs) are increasingly being adapted to new tasks and deployed in real-world decision systems. Several previous works have investigated the bias transfer hypothesis (BTH) and found that the fairness of pre-trained masked language models has limited effect on the fairness of the same models after fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptation, as prompting is an accessible and compute-efficient way to deploy models in real-world systems. In contrast to previous work, we establish that intrinsic biases in pre-trained Mistral, Falcon and Llama models are strongly correlated (rho >= 0.94) with the biases those same models exhibit when zero- or few-shot prompted on a pronoun coreference resolution task. Further, we find that biases remain strongly correlated even when LLMs are specifically pre-prompted to exhibit fair or biased behavior (rho >= 0.92), and also when varying few-shot composition parameters such as sample size, stereotypical content, occupational distribution and representational balance (rho >= 0.90). Our findings highlight the importance of ensuring fairness in pre-trained LLMs, especially when they are later used to perform downstream tasks via prompt adaptation.
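As a minimal sketch of the correlation analysis the abstract describes, the snippet below computes Spearman's rho between per-occupation bias scores measured intrinsically and after prompt adaptation; the variable names and numeric values are hypothetical placeholders, not data from the paper.

```python
# Hypothetical sketch: correlating intrinsic bias with prompt-adapted bias.
# Each entry would be one per-occupation bias score, e.g. the gap in
# coreference accuracy between pro- and anti-stereotypical pronoun cases.
from scipy.stats import spearmanr

intrinsic_bias = [0.31, 0.12, 0.45, 0.08, 0.27]   # pre-trained model (placeholder values)
prompted_bias  = [0.29, 0.15, 0.41, 0.10, 0.30]   # same model, few-shot prompted (placeholders)

rho, p_value = spearmanr(intrinsic_bias, prompted_bias)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```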
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8170