Quantile-Conditioned Fairness: Extending Binary Fairness Evaluation to Continuous Outcomes

Published: 11 Nov 2025, Last Modified: 16 Jan 2026
Venue: DAI Poster
License: CC BY 4.0
Keywords: Bias and Fairness, Traditional Machine Learning, Fairness Evaluation
TLDR: A quantile-conditioned technique for evaluating the fairness of regression models
Abstract: Bias and fairness remain persistent challenges in the responsible deployment of machine learning systems. While most existing metrics are designed for binary classification, fairness evaluation for regression models (widely used in domains such as risk scoring, pricing, and demand forecasting) remains comparatively underexplored. We introduce a quantile-conditioned fairness framework for regression that extends conditional fairness assessment from binary to continuous outcomes. The proposed method partitions target values into quantiles, computes group-to-complement prediction ratios within each segment, and aggregates these ratios into interpretable fairness scores. Through a series of controlled ablation studies on synthetic data, we analyze the effects of bias strength, protected-group imbalance, and model performance. We also benchmark our approach against the open-source Dalex fairness toolkit. We further show that the same conditioning principle extends naturally to multiclass classification by treating each class as a conditioning bucket. Real-world case studies on regression and classification datasets demonstrate the practical utility of our approach. Our implementation is lightweight and easily integrable into existing model development workflows, providing a deployable framework for fairness evaluation across domains.
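
To make the conditioning-and-ratio idea concrete, the following is a minimal Python sketch of the procedure the abstract describes: bucket samples by quantiles of the true target, compare mean predictions for the protected group against its complement within each bucket, and aggregate. The function name, the quantile count, the use of mean predictions, and the mean-of-ratios aggregation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quantile_conditioned_fairness(y_true, y_pred, protected, n_quantiles=4):
    """Hypothetical sketch of a quantile-conditioned fairness score.

    Partitions samples into quantile buckets of the true target, then
    compares the mean prediction for the protected group against its
    complement within each bucket. A ratio of 1.0 in every bucket
    indicates parity; the mean-of-ratios aggregation is an assumption.
    Assumes predictions are positive-valued (e.g., risk scores, prices).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    protected = np.asarray(protected, dtype=bool)

    # Quantile edges over the true target; duplicate edges are dropped
    # so heavily skewed targets do not produce empty buckets.
    edges = np.unique(np.quantile(y_true, np.linspace(0, 1, n_quantiles + 1)))
    buckets = np.clip(np.searchsorted(edges, y_true, side="right") - 1,
                      0, len(edges) - 2)

    ratios = []
    for b in range(len(edges) - 1):
        mask = buckets == b
        grp, comp = mask & protected, mask & ~protected
        # Skip buckets where either group is absent; the ratio is
        # undefined there.
        if grp.sum() == 0 or comp.sum() == 0:
            continue
        ratios.append(y_pred[grp].mean() / y_pred[comp].mean())

    # Aggregate per-bucket ratios into a single interpretable score.
    return float(np.mean(ratios)) if ratios else float("nan")
```

For multiclass classification, the same skeleton applies with the class label replacing the quantile bucket as the conditioning variable, as the abstract suggests.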
Submission Number: 11