Prediction without Preclusion: Recourse Verification with Reachable Sets

ICML 2023 Workshop SCIS, Submission 57

Published: 20 Jun 2023 · Last Modified: 28 Jul 2023 · SCIS 2023 Poster
Keywords: recourse, fairness, robustness
TL;DR: Models may assign predictions that are invariant to our actions. We develop methods to test for this effect and show how they can be used to ensure access in lending and robustness in content moderation.
Abstract: Machine learning models are now used to decide who will receive a loan, a job interview, or a public service. Standard techniques to build these models use features that characterize people but overlook their *actionability*. In domains like lending and hiring, models can assign predictions that are *fixed* – meaning that consumers denied loans and interviews are *precluded from access* to credit and employment. In this work, we introduce *recourse verification*, a formal testing procedure to flag models that assign fixed predictions. We develop machinery to reliably test the feasibility of recourse *for any model* under user-specified actionability constraints. We demonstrate how these tools can ensure recourse and adversarial robustness and use them to study the infeasibility of recourse in real-world lending datasets. Our results highlight how models can inadvertently assign fixed predictions that preclude access and motivate the need to design algorithms that account for actionability when developing models and providing recourse.
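To make the idea concrete, here is a minimal sketch of recourse verification over a reachable set. This is an illustration only, not the paper's implementation: the function names, the per-feature additive action sets, and the brute-force enumeration strategy are all assumptions. Given a denied point, we enumerate the feature vectors reachable under user-specified actionability constraints and test whether any of them flips the model's prediction; if none does, the prediction is *fixed* and recourse is infeasible.

```python
# Hypothetical sketch of recourse verification via reachable sets.
# All names and the enumeration strategy are illustrative assumptions.

from itertools import product
from typing import Callable, Sequence

import numpy as np


def reachable_set(x: np.ndarray, actions: Sequence[Sequence[float]]) -> np.ndarray:
    """Enumerate the points reachable from x.

    actions[j] lists the admissible additive changes to feature j
    (e.g., [0.0] for an immutable feature, [0.0, 1.0] for a feature
    that can only increase by one unit). The reachable set is the
    Cartesian product of these per-feature action sets applied to x.
    """
    return np.array([x + np.asarray(d) for d in product(*actions)])


def verify_recourse(
    predict: Callable[[np.ndarray], np.ndarray],
    x: np.ndarray,
    actions: Sequence[Sequence[float]],
    target: int = 1,
) -> bool:
    """Return True iff some reachable point receives the target prediction."""
    points = reachable_set(x, actions)
    return bool(np.any(predict(points) == target))


if __name__ == "__main__":
    # Toy linear classifier: approve iff 2*income - 3*prior_defaults >= 4.
    predict = lambda X: (X @ np.array([2.0, -3.0]) >= 4.0).astype(int)

    x = np.array([1.0, 1.0])  # a denied applicant
    # income can rise by up to 2 units; prior_defaults is immutable.
    actions = [[0.0, 1.0, 2.0], [0.0]]

    if verify_recourse(predict, x, actions):
        print("recourse exists: some feasible action flips the prediction")
    else:
        print("prediction is fixed: the applicant is precluded from approval")
```

In this toy instance the test reports a fixed prediction: even the largest feasible income increase leaves the score below the approval threshold, so no admissible action yields recourse. Exhaustive enumeration only scales to small discrete action sets; the paper develops machinery to perform this test reliably for general models.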
Submission Number: 57