Evaluating Prediction-based Interventions with Human Decision Makers in Mind
TL;DR: Cognitive biases induced in human decision makers by the experimental design may significantly alter the observed effect size of an algorithmic intervention.
Abstract: Automated decision systems (ADS) are broadly deployed to inform or support human decision-making across a wide range of consequential contexts. However, various context-specific details complicate the goal of establishing meaningful experimental evaluations for prediction-based interventions. Notably, specific experimental design decisions may induce cognitive biases in human decision makers, which can significantly alter the observed effect sizes of the prediction intervention. In this paper, we formalize and investigate various models of human decision-making in the presence of a predictive model aid. We show that each of these behavioral models produces dependencies across decision subjects and violates existing assumptions, with consequences for treatment effect estimation. This work aims to further advance the scientific validity of intervention-based evaluation schemes for the assessment of ADS deployments.
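The abstract's core claim, that a behavioral response to the aid can create dependence across decision subjects and shift the measured treatment effect, can be made concrete with a small simulation. The sketch below is purely illustrative and not taken from the paper: it assumes a hypothetical anchoring-style behavioral model in which a decision maker's reliance on the aid (`trust`) drifts with the aid's recent track record, so each decision depends on the cases seen before it. The function names, noise levels, and the 0.1 trust-update step are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(n, use_aid, anchoring, rng):
    """Simulate one experimental arm of n sequential decisions.

    Hypothetical behavioral model (illustrative only): the decision
    maker combines a private signal with the model's risk score, and,
    when `anchoring` is on, their reliance on the aid drifts with how
    well the aid performed on recent cases, making decisions serially
    dependent across decision subjects.
    """
    y = rng.binomial(1, 0.5, size=n)             # true binary outcomes
    signal = y + rng.normal(0, 1.0, size=n)      # human's noisy private signal
    score = y + rng.normal(0, 0.5, size=n)       # model risk score (more accurate)
    trust = 0.5                                  # initial weight on the aid
    correct = np.zeros(n)
    for i in range(n):
        belief = (1 - trust) * signal[i] + trust * score[i] if use_aid else signal[i]
        decision = int(belief > 0.5)
        correct[i] = decision == y[i]
        if use_aid and anchoring:
            # Trust moves toward the aid after it proves right and away
            # after it proves wrong: the i-th decision now depends on
            # the realized history of earlier cases.
            aid_right = int(score[i] > 0.5) == y[i]
            trust = float(np.clip(trust + (0.1 if aid_right else -0.1), 0.0, 1.0))
    return correct

def estimated_effect(anchoring, n=2000, reps=200, rng=rng):
    """Naive difference-in-means estimate of the aid's effect on accuracy."""
    effects = []
    for _ in range(reps):
        treated = simulate_arm(n, use_aid=True, anchoring=anchoring, rng=rng)
        control = simulate_arm(n, use_aid=False, anchoring=False, rng=rng)
        effects.append(treated.mean() - control.mean())
    return float(np.mean(effects))

print("effect, memoryless decision maker:", estimated_effect(anchoring=False))
print("effect, anchoring decision maker: ", estimated_effect(anchoring=True))
```

Under these assumptions the two conditions yield different effect estimates: the anchoring decision maker's growing reliance on a mostly-accurate aid inflates the observed effect, and the serial dependence it introduces also invalidates variance estimates that assume independent decisions.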
Submission Number: 402