Keywords: Algorithmic fairness, fairness measures, machine learning, artificial intelligence
TLDR: Why existing fairness measures are still insufficient
Abstract: Increasing reliance on artificial intelligence and machine learning in high-stakes domains raises acute concerns about discrimination. However, existing fairness metrics remain limited in applicability, especially in complex real-world scenarios involving more than a single protected attribute. In this work, we take widely used group fairness measures as examples and comprehensively analyse their effectiveness in comparison with individual fairness. The empirical results illustrate that: (1) binarising protected attributes underestimates discrimination; (2) traversal-based generalisation incurs substantial computational cost; and (3) intersectional attributes remain only superficially addressed.
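The first finding can be illustrated with a minimal sketch (hypothetical toy data, not the paper's code or results): collapsing a multi-valued protected attribute to a binary one can dilute the most disadvantaged group and shrink the measured demographic-parity gap.

```python
# Hypothetical illustration: binarising a protected attribute can
# underestimate the demographic-parity gap. Data and group names are
# invented for this sketch; they are not from the paper.
from collections import defaultdict

def positive_rates(groups, preds):
    # Per-group rate of positive (favourable) predictions.
    totals, pos = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, preds):
        totals[g] += 1
        pos[g] += y
    return {g: pos[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, preds):
    # Max difference in positive rates across groups.
    rates = positive_rates(groups, preds).values()
    return max(rates) - min(rates)

# Three groups with positive rates 0.8, 0.5, and 0.2 (toy data).
groups = ["A"] * 10 + ["B"] * 10 + ["C"] * 10
preds = [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5 + [1] * 2 + [0] * 8

full_gap = demographic_parity_gap(groups, preds)  # 0.8 - 0.2 = 0.6

# Binarise to "A" vs "other": group C's 0.2 rate is averaged with
# group B's 0.5, and the measured gap shrinks to 0.8 - 0.35 = 0.45.
binarised = ["A" if g == "A" else "other" for g in groups]
bin_gap = demographic_parity_gap(binarised, preds)
```

Here the true worst-case gap (0.6) is underestimated as 0.45 once the attribute is binarised, which is the kind of effect the abstract's finding (1) refers to.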
Submission Number: 6