Assessing machine learning fairness with multiple contextual norms

Published: 15 Oct 2025, Last Modified: 31 Oct 2025
Venue: BNAIC/BeNeLearn 2025 (Oral)
License: CC BY 4.0
Track: Type A (Regular Papers)
Keywords: algorithmic fairness, socio-technical systems, fairness assessment, machine learning
Abstract: The increasing societal impact of decisions made with machine learning (ML) systems requires these decisions to be fair. To fully address the fairness of an ML system, fairness must be interpreted as context-dependent and shaped by the societal environment in which the system operates. However, existing group fairness measures do not consider this context: they reduce fairness to equality between groups and neglect other norms such as equity or need. Therefore, we propose contextual fairness, a context-dependent framework for assessing ML fairness with multiple context-specific norms such as equality, equity, and need. Our approach involves four steps: eliciting fairness norms from the relevant stakeholders, operationalizing these norms into quantitative measures, combining these measures into a weighted contextual fairness score, and analyzing this score at the global, between-group, and within-group levels. We evaluate our approach on the ACS Income dataset for both binary classification and regression tasks, showing how contextual fairness improves the interpretation of fairness assessment and mitigation in context. By explicitly incorporating norms into fairness assessment, our approach enables more nuanced evaluations and better supports practitioners and regulators in considering the societal context.
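To make the four-step pipeline concrete, the sketch below illustrates one plausible reading of steps two through three: operationalizing three norms (equality, equity, need) as between-group gap measures and combining them with stakeholder-elicited weights into a single score. All function names, the specific gap definitions, and the weight values are illustrative assumptions for this sketch, not the authors' implementation from the paper.

```python
# Minimal illustrative sketch of a weighted contextual fairness score.
# The gap measures and weights below are hypothetical placeholders,
# not the operationalizations used in the paper.
import numpy as np

def equality_gap(outcomes, groups):
    """Largest gap in mean outcome between groups (smaller = fairer)."""
    means = [outcomes[groups == g].mean() for g in np.unique(groups)]
    return max(means) - min(means)

def equity_gap(outcomes, groups, effort):
    """Gap in mean outcome-per-effort between groups (equity proxy)."""
    ratios = [(outcomes[groups == g] / effort[groups == g]).mean()
              for g in np.unique(groups)]
    return max(ratios) - min(ratios)

def need_gap(outcomes, groups, need):
    """Gap in mean outcome among high-need individuals across groups."""
    high_need = need > np.median(need)
    means = [outcomes[(groups == g) & high_need].mean()
             for g in np.unique(groups)]
    return max(means) - min(means)

def contextual_fairness(measures, weights):
    """Weighted combination of norm measures (lower = fairer overall)."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), np.asarray(measures, dtype=float)))

# Synthetic data standing in for model outcomes on two groups.
rng = np.random.default_rng(0)
outcomes = rng.random(1000)
groups = rng.integers(0, 2, 1000)
effort = rng.random(1000) + 0.5
need = rng.random(1000)

# Hypothetical stakeholder-elicited weights favouring equity in this context.
score = contextual_fairness(
    [equality_gap(outcomes, groups),
     equity_gap(outcomes, groups, effort),
     need_gap(outcomes, groups, need)],
    weights=[0.2, 0.5, 0.3],
)
print(f"contextual fairness score: {score:.3f}")
```

In the paper's framing, the weights would come from the norm-elicitation step with stakeholders; the hard-coded values above are placeholders. The same per-group quantities would also feed the between-group and within-group analyses of step four.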
Serve As Reviewer: ~Monowar_Bhuyan1
Submission Number: 79