Evaluating Model Bias Requires Characterizing its Mistakes

Published: 05 Mar 2024, Last Modified: 08 May 2024 · ICLR 2024 R2-FM Workshop Poster · CC BY 4.0
Keywords: model bias, foundation models evaluation, performance disparity across subgroups
TL;DR: We propose a metric that accounts for systematic mismatches in prediction errors across subgroups to measure model bias
Abstract: The ability to benchmark model performance in the face of spurious correlations is important both to build better predictors and to increase confidence that models are operating as intended. We demonstrate that characterizing, as opposed to simply quantifying, model mistakes across subgroups is pivotal to properly reflect model biases, which are ignored by standard metrics such as accuracy gap. Inspired by the hypothesis testing framework, we introduce SkewSize, a principled and flexible metric that captures bias from mistakes in a model's predictions. It can be used in multi-class settings or generalised to the open-vocabulary setting of generative foundation models. SkewSize is an aggregation of the effect size of the interaction between two categorical variables: the spurious variable representing the bias attribute and the model's prediction. We demonstrate the utility of SkewSize in multiple settings, including standard vision models trained on synthetic data and multimodal foundation models from the BLIP-2 family. In each case, SkewSize is able to highlight biases not captured by other metrics, while also providing insights on the impact of recently proposed techniques, such as instruction tuning.
Submission Number: 23
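The core computation the abstract describes — an effect size for the interaction between the bias attribute and the model's prediction, aggregated across classes — can be sketched as below. This is an illustrative approximation, not the paper's exact formulation: the function names, the use of Cramér's V as the effect-size measure, and the mean aggregation over true classes are all assumptions made for the example.

```python
import numpy as np

def cramers_v(table):
    """Cramér's V effect size for the association between two categorical
    variables, given their contingency table (rows: bias attribute values,
    columns: predicted classes)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    k = min(table.shape) - 1
    if n == 0 or k == 0:
        return 0.0
    # Expected counts under independence of rows and columns.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    mask = expected > 0  # empty rows/columns contribute no signal
    chi2 = ((table - expected)[mask] ** 2 / expected[mask]).sum()
    return float(np.sqrt(chi2 / (n * k)))

def skewsize_sketch(bias, y_true, y_pred):
    """Hypothetical SkewSize-style score: within each true class, measure the
    effect size between the bias attribute and the prediction, then aggregate
    (here: mean). A score near 0 means errors are spread independently of the
    bias attribute; larger scores mean systematically skewed mistakes."""
    bias, y_true, y_pred = map(np.asarray, (bias, y_true, y_pred))
    scores = []
    for c in np.unique(y_true):
        sel = y_true == c
        b_vals = np.unique(bias[sel])
        p_vals = np.unique(y_pred[sel])
        table = np.zeros((len(b_vals), len(p_vals)))
        for b, p in zip(bias[sel], y_pred[sel]):
            table[np.searchsorted(b_vals, b), np.searchsorted(p_vals, p)] += 1
        scores.append(cramers_v(table))
    return float(np.mean(scores))
```

For intuition: if, within a class, every example with bias attribute 0 is predicted one way and every example with attribute 1 another, the per-class Cramér's V is 1; if predictions are distributed identically across attribute values, it is 0 — a disparity that an overall accuracy gap can entirely miss.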