Bias Exposed: The BiaXposer Framework for NLP Fairness

Published: 01 Jan 2024 · Last Modified: 20 May 2025 · ICSOC (1) 2024 · CC BY-SA 4.0
Abstract: Natural Language Processing models often exhibit harmful social biases, leading to discrimination against certain demographic groups. Assessing the fairness of these models has thus become a critical area of research, resulting in the development of various bias metrics. However, many of these metrics have been criticized for being brittle, opaque, and sometimes contradictory, leaving practitioners unsure which metrics to trust and in which contexts to use them. This paper introduces BiaXposer, a customizable and extensible fairness evaluation service designed to address these challenges. BiaXposer provides a generalized framework that unifies most existing task-specific bias metrics and supports a variety of fairness idioms. The service enables practitioners to quickly assess and quantify social biases in their models, and it facilitates the creation and sharing of new bias metrics.
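To make the idea of a task-specific, group-comparison bias metric concrete, below is a minimal, purely illustrative sketch of template-based fairness scoring in the spirit the abstract describes. Every name here (the templates, the `model_score` stand-in, the `pairwise_gap` metric) is an assumption for illustration and does not reflect BiaXposer's actual API.

```python
# Illustrative sketch only: template-based group-fairness scoring.
# None of these names come from BiaXposer; they are hypothetical.

from itertools import combinations
from statistics import mean

# Each template is instantiated once per demographic group under test.
TEMPLATES = ["{} people are good at math.", "The {} applicant was hired."]
GROUPS = ["young", "old"]


def model_score(sentence: str) -> float:
    """Stand-in for the model under evaluation (e.g., a sentiment
    probability). A real evaluation would call the actual model here."""
    return 0.9 if "young" in sentence else 0.6  # toy, deliberately biased


def pairwise_gap(scores_by_group: dict[str, list[float]]) -> float:
    """A simple group-fairness metric: the mean absolute difference in
    average scores between every pair of groups (0.0 means parity)."""
    gaps = [abs(mean(scores_by_group[a]) - mean(scores_by_group[b]))
            for a, b in combinations(scores_by_group, 2)]
    return mean(gaps)


scores = {g: [model_score(t.format(g)) for t in TEMPLATES] for g in GROUPS}
print(f"pairwise score gap: {pairwise_gap(scores):.3f}")  # 0.300 for the toy model
```

Swapping in a different `pairwise_gap` function is one plausible way a framework of this kind could let practitioners define and share new bias metrics over the same templated inputs.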