Measuring the Impact of Equal Treatment as Blindness via Distributions of Explanations Disparities

TMLR Paper 4169 Authors

09 Feb 2025 (modified: 13 Feb 2025) · Under review for TMLR · CC BY 4.0
Abstract: Liberal political philosophy advocates for the policy of \emph{equal treatment as blindness}, which seeks to achieve fairness by treating individuals without directly considering their protected characteristics. However, this policy has faced longstanding criticism for perpetuating existing inequalities. In machine learning, this policy translates into the concept of \emph{fairness as unawareness} and can be measured using disparate treatment metrics such as Demographic Parity (a.k.a. Statistical Parity). Our analysis reveals that Demographic Parity does not faithfully measure whether a model treats individuals independently of the protected attribute. We introduce the Explanation Disparity metric to measure fairness under the \emph{equal treatment as blindness} policy. Our metric evaluates the fairness of predictive models by analyzing the extent to which the protected attribute can be inferred from the distribution of explanation values, specifically Shapley values. The proposed metric tests for statistical independence of the explanation distributions over populations with different protected characteristics. We establish the theoretical properties of Explanation Disparity and devise an equal treatment inspector based on the AUC of a Classifier Two-Sample Test. We experiment with synthetic and natural data to demonstrate the notion and compare it with related ones. We release \href{https://anonymous.4open.science/r/explanationspace-B4B1/README.md}{\texttt{explanationspace}}, an open-source Python package with methods and tutorials.
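The abstract's pipeline (explain the model with Shapley values, then test whether a classifier can recover the protected attribute from the explanations) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation or the \texttt{explanationspace} API: it uses the closed-form Shapley values of a linear model (for a linear model with a mean baseline, the attribution of feature $j$ on instance $i$ is $w_j (x_{ij} - \bar{x}_j)$, assuming feature independence), and all variable names and the data-generating setup are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical synthetic data: x1 correlates with the protected
# attribute a, while x2 is independent of it.
n = 4000
a = rng.integers(0, 2, n)                     # protected attribute
x1 = rng.normal(loc=a, scale=1.0, size=n)     # depends on a
x2 = rng.normal(size=n)                       # independent of a
X = np.column_stack([x1, x2])
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Model under audit: trained without access to a ("fairness as unawareness").
f = LogisticRegression().fit(X, y)

# Linear-model Shapley values on the logit scale (closed form,
# assuming independent features): phi_ij = w_j * (x_ij - mean(x_j)).
phi = f.coef_[0] * (X - X.mean(axis=0))

# Classifier Two-Sample Test: try to predict a from the explanations.
# Cross-validated AUC near 0.5 suggests the explanation distributions
# are indistinguishable across groups, i.e. equal treatment.
g = LogisticRegression()
scores = cross_val_predict(g, phi, a, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(a, scores)
print(f"Explanation Disparity AUC: {auc:.3f}")
```

Because x1 carries information about the protected attribute, the inspector's AUC here lands well above 0.5, flagging a disparity that a prediction-level test could miss if the group-wise positive rates happened to match.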
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Dennis_Wei1
Submission Number: 4169
