Exploiting Selection Bias on Underspecified Tasks in Large Language Models

03 Oct 2022 (modified: 03 Nov 2024) · CML4Impact
Keywords: causal inference, uncertainty, large language models, selection bias, collider bias
TL;DR: Using causal inference methods, we explain and demonstrate how spurious associations caused by selection bias can be used to classify prediction tasks as underspecified during inference.
Abstract: In this paper, we motivate the causal mechanisms behind sample selection induced collider bias (selection collider bias), which can cause Large Language Models (LLMs) to learn unconditional dependence between entities that are unconditionally independent in the real world. We show that selection collider bias can become amplified in underspecified learning tasks, and although it is difficult to overcome, we describe a method that exploits the resulting spurious correlations to determine when a model may be uncertain about its prediction. We demonstrate an uncertainty metric that matches human uncertainty on tasks with gender pronoun underspecification, using an extended version of the Winogender Schemas evaluation set, and we provide online demos where users can evaluate spurious correlations and apply our uncertainty metric to their own texts and models. Finally, we generalize our approach to address a wider range of prediction tasks.
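As a rough illustration of the kind of uncertainty metric the abstract describes, the sketch below scores an underspecified pronoun slot with an off-the-shelf masked language model and treats a near-uniform distribution over candidate pronouns as a sign of underspecification. The model name (`bert-base-uncased`), the binary pronoun set, and the normalized-entropy score are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch: flag underspecified pronoun slots via prediction entropy.
# Assumptions (not from the paper): bert-base-uncased, a binary pronoun set,
# and normalized entropy as the uncertainty score.
import math
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
PRONOUNS = ["he", "she"]  # illustrative candidate set

def pronoun_uncertainty(text: str) -> float:
    """Return normalized entropy (0..1) over candidate pronouns for the [MASK] slot."""
    preds = fill_mask(text, targets=PRONOUNS)
    scores = [p["score"] for p in preds]
    total = sum(scores)
    probs = [s / total for s in scores]  # renormalize over the candidate set
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(len(probs))  # 1.0 = maximally uncertain

# Usage: a Winogender-style sentence where the pronoun referent is underspecified.
sentence = "The doctor told the patient that [MASK] would need to rest."
print(pronoun_uncertainty(sentence))
```

A score near 1.0 suggests the model has no strong pronoun preference, which, under the assumptions above, could be read as the task being underspecified rather than the model being confident in a spurious association.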
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/exploiting-selection-bias-on-underspecified/code)