Grounding and Validation of Algorithmic Recourse in Real-World Contexts: A Systematized Literature Review
Keywords: Algorithmic Recourse, Counterfactual Explanations, Explainable AI, Real-world Systems
TL;DR: We carry out a systematized literature review of algorithmic recourse and find a strong disconnect between current approaches to the problem and the requirements of realistic applications.
Abstract: The aim of algorithmic recourse (AR) is generally understood to be the provision of "actionable" recommendations to individuals affected by algorithmic decision-making systems, offering them the capacity to take actions that may lead to more desirable outcomes in the future. Over the past few years, AR literature has largely focused on theoretical frameworks for generating "actionable" counterfactual explanations that further satisfy various desiderata, such as diversity or robustness. We believe that algorithmic recourse, by its nature, should be treated as a practical problem: real-world socio-technical decision-making systems are complex, dynamic entities involving various actors (end users, domain experts, civil servants, system owners, etc.) engaged in social and technical processes. Research therefore needs to account for the specificities of the systems in which it would be applied. To evaluate how authors envision AR "in the wild", we carry out a systematized review of 127 publications on the problem and identify the real-world considerations that motivate them. Among other aspects, we examine the ways in which recourse is made (individually) actionable, the stakeholders involved, the challenges perceived, and the availability of practitioner-friendly open-source codebases. We find a strong disconnect between the existing research and the practical requirements for AR. Most importantly, the grounding and validation of algorithmic recourse in real-world contexts remain underexplored. In an attempt to bridge this gap, we provide authors with five recommendations to make future solutions easier to adapt to their potential real-world applications.
Primary Area: Interpretability and explainability
Submission Number: 16960