A Critical Survey on Fairness Benefits of XAI

Published: 27 Oct 2023, Last Modified: 20 Nov 2023, NeurIPS XAIA 2023
TL;DR: We analyzed the claims of 175 papers and noticed a misalignment between the fairness desiderata attributed to XAI and the actual capabilities of XAI.
Abstract: In this critical survey, we analyze typical claims about the relationship between explainable AI (XAI) and fairness, and disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent qualitative content analysis, we identify seven archetypal claims from 175 papers on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. While the literature often presents XAI as an enabler of several fairness desiderata, we observe a divide between these desiderata and the actual capabilities of XAI. We encourage readers to conceive of XAI as one of many tools for approaching the multidimensional, sociotechnical challenge of algorithmic fairness, and to be specific about how exactly which kind of XAI method enables whom to address which fairness desideratum.
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: Sociotechnical Perspective
Survey Question 1: XAI is seen as a harbinger of hope for several fairness objectives. For example, SHAP and LIME are often applied to assess how a model uses sensitive attributes (a minimal sketch of this practice follows Survey Question 3 below). Due to the multidimensionality of fairness and the technical limitations of many XAI methods, we challenge common claims that are overly optimistic about the potential of XAI.
Survey Question 2: In high-stakes and fairness-sensitive domains, XAI is hoped to reveal and prevent discrimination. Various XAI methods are commonly employed throughout the AI lifecycle, e.g., for fairness debugging, for fairness auditing, or to provide justifications to affected parties.
Survey Question 3: Our survey focuses on XAI in general and includes intrinsic interpretability, data-centric explanations, post-hoc explanations (mostly LIME, SHAP, or PDP), and counterfactual explanations.
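To illustrate the practice referenced in Survey Question 1, the following minimal Python sketch uses SHAP to probe how a trained classifier attributes its predictions to a sensitive attribute. It is not taken from the paper: the synthetic data, the random-forest model, and the column name "gender" are illustrative assumptions, and, as the survey cautions, the resulting attribution score is only a limited proxy for reliance on the sensitive attribute.

```python
# Minimal sketch (assumption, not from the paper): using SHAP to inspect how a
# model attributes its predictions to a sensitive attribute. The synthetic
# data, the model, and the column name "gender" are illustrative choices.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "age": rng.integers(18, 70, n),
    "gender": rng.integers(0, 2, n),  # sensitive attribute, assumed binary
})
# Synthetic label that deliberately leaks the sensitive attribute.
y = ((X["income"] > 50_000) & (X["gender"] == 1)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
vals = shap.TreeExplainer(model).shap_values(X)
# Depending on the shap version, binary classifiers yield either a list of
# per-class arrays or a single (samples, features, classes) array.
if isinstance(vals, list):
    vals = vals[1]          # positive class
elif vals.ndim == 3:
    vals = vals[..., 1]     # positive class

# Mean absolute attribution of the sensitive attribute: a common but limited
# proxy for "how much" the model relies on it.
gender_idx = X.columns.get_loc("gender")
print("mean |SHAP| for gender:", np.abs(vals[:, gender_idx]).mean())
```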
Submission Number: 55