Policy Comparison Under Unmeasured Confounding

Published: 27 Oct 2023, Last Modified: 12 Dec 2023, RegML 2023
Keywords: algorithmic decision support, model evaluation, efficacy, functionality, confounding
TL;DR: We develop an approach for characterizing and reducing uncertainty in decision-making policy comparisons in the presence of confounding.
Abstract: Predictive models are often introduced under the rationale that they improve performance over an existing decision-making policy. However, directly comparing an algorithm against a status quo policy is challenging because of the uncertainty introduced by confounding and selection bias. In this work, we develop a regret estimator that evaluates differences in classification metrics across decision-making policies under confounding. Theoretical and experimental results demonstrate that our estimator yields tighter regret bounds than existing auditing frameworks designed to evaluate predictive models under confounding. Further, we show that our estimator can be combined with a flexible set of causal identification strategies to yield informative and well-justified policy comparisons. Our experiments also illustrate how confounding and selection bias contribute to uncertainty in subgroup-level policy comparisons. We hope that our auditing framework will support the operationalization of regulatory frameworks calling for more direct assessments of predictive model efficacy.
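To make the setting concrete, below is a minimal sketch of how a regret between a candidate policy and a status quo policy can become only partially identified when outcomes are missing under selection. It uses simple worst-case (Manski-style) imputation under a selective-labels assumption; the function `regret_bounds` and its metric are hypothetical illustrations, not the estimator or identification strategies developed in the paper.

```python
import numpy as np

def regret_bounds(y_obs, d_status_quo, d_model):
    """Worst-case bounds on the regret of a candidate policy vs. the status quo.

    Illustrative only: a Manski-style worst-case-imputation bound under a
    selective-labels assumption (Y observed only when d_status_quo == 1),
    for the simple metric "fraction of units selected with a positive outcome".
    """
    y_obs = np.asarray(y_obs, dtype=float)        # outcomes in {0, 1}; NaN where unobserved
    d_sq = np.asarray(d_status_quo, dtype=float)  # status-quo decisions in {0, 1}
    d_m = np.asarray(d_model, dtype=float)        # candidate-policy decisions in {0, 1}

    observed = d_sq == 1                          # labels exist only where the status quo acted

    # Extreme imputations of the unobserved outcomes.
    y_lo = np.where(observed, y_obs, 0.0)
    y_hi = np.where(observed, y_obs, 1.0)

    def value(d, y):
        # Fraction of units that policy d selects and that have a positive outcome.
        return float(np.mean(d * y))

    # Regret = value(status quo) - value(candidate). The status-quo value is
    # point identified (its selected units are exactly the labeled ones), so
    # only the candidate's value varies with the imputation.
    v_sq = value(d_sq, y_lo)
    return v_sq - value(d_m, y_hi), v_sq - value(d_m, y_lo)
```

For example, with `y_obs = [1, 0, nan, nan]`, `d_status_quo = [1, 1, 0, 0]`, and `d_model = [1, 0, 1, 0]`, the sketch returns the interval (-0.25, 0.0): the width comes entirely from the unlabeled unit that only the candidate policy would select, which is the kind of uncertainty the paper's framework aims to characterize and reduce.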
Submission Number: 53