Abstract: Prior work on human-algorithmic bias has struggled to empirically identify the underlying mechanisms of bias because, in a typical “one-time” decision-making scenario, different mechanisms generate the same patterns of observable decisions. In this study, leveraging a unique repeat decision-making setting in a high-stakes microlending context, we aim to uncover the underlying source, evolution dynamics, and associated impacts of bias. We first develop a structural econometric model of the decision dynamics to understand the source and evolution of bias among human evaluators in microloan granting. We find that both preference-based and belief-based biases exist in human decisions and favor female applicants. Our counterfactual simulations show that eliminating either of the two biases improves fairness in financial resource allocation as well as platform profits. The profit improvement stems mainly from the increased approval probability for male borrowers, especially those who would eventually repay their loans. Furthermore, to examine how human biases evolve when inherited by machine learning (ML) algorithms, we train state-of-the-art ML algorithms for default risk prediction on both real-world data sets with human biases encoded within them and counterfactual data sets with human biases partially or fully removed. We find that even fairness-unaware ML algorithms can reduce the bias present in human decisions. Interestingly, although removing both types of human bias from the training data can further improve ML fairness, the fairness-enhancing effects vary significantly between new and repeat applicants. Based on our findings, we discuss how to reduce decision bias most effectively in a human-ML pipeline. This paper was accepted by D. J. Wu, Special Issue on the Human-Algorithm Connection. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.03862.
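To make the ML portion of the design concrete, the sketch below illustrates the kind of comparison the abstract describes: training a fairness-unaware classifier once on labels reflecting (hypothetically) biased human decisions and once on counterfactually debiased labels, then comparing a group approval-rate gap. This is a minimal illustration on synthetic data, not the authors' code; the gradient-boosting model, the synthetic data-generating process, and the `approval_gap` metric are all assumptions standing in for the paper's actual structural estimates and fairness measures.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split


def approval_gap(model, X, gender):
    """Predicted approval rate for female minus male applicants.

    A simple demographic-parity-style gap; positive values favor
    female applicants. (Hypothetical metric for illustration.)
    """
    approve = model.predict(X)  # 1 = approve (predicted non-default)
    return approve[gender == 1].mean() - approve[gender == 0].mean()


# Hypothetical data: features X, a gender indicator g (1 = female), and two
# label vectors -- y_biased mimics observed human decisions that favor female
# applicants, while y_debiased mimics counterfactual labels with that bias
# removed.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))
g = rng.integers(0, 2, size=n)
y_biased = (X[:, 0] + 0.5 * g + rng.normal(size=n) > 0).astype(int)
y_debiased = (X[:, 0] + rng.normal(size=n) > 0).astype(int)

for name, y in [("human-biased labels", y_biased), ("debiased labels", y_debiased)]:
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)  # fairness-unaware model
    print(f"{name}: approval gap = {approval_gap(clf, X_te, g_te):+.3f}")
```

Under this setup, a smaller gap for the model trained on debiased labels would mirror the paper's finding that removing human bias from training data can further improve ML fairness.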