Fairness Artificial Intelligence in Clinical Decision Support: Mitigating Effect of Health Disparity

Published: 01 Jan 2025, Last Modified: 17 May 2025 · IEEE J. Biomed. Health Informatics 2025 · CC BY-SA 4.0
Abstract: The United States, like the global community, experiences health disparities among socially disadvantaged populations. These disparities often manifest in the data used to train AI models. Without appropriate de-biasing strategies, models trained to optimize predictive performance may inadvertently capture and perpetuate these inherent biases. Using biased models in clinical decision-making can harm patients from disadvantaged groups and exacerbate disparities when those decisions are documented and later used to train subsequent AI models. Unlike conventional correlation-based methods, we aim to mitigate the negative impacts of health disparity by answering a causal-inference question about fairness: would the clinical decision support system make a different decision if the patient had a different sensitive attribute (e.g., race)? Recognizing the high computational complexity of building causal models, we propose a flexible and efficient causal-model-free algorithm, CFReg, which provides causal fairness for supervised machine learning models. In addition, we develop a novel evaluation metric to quantify fairness in clinical settings. We first validate CFReg on a care-management healthcare dataset of 48,784 patients, then generalize to four additional benchmark datasets with racial and ethnic disparities: law school admission, adult income, criminal recidivism, and violent crime prediction. Experimental results demonstrate that CFReg outperforms baseline approaches in both fairness and accuracy, achieving a favorable trade-off between model fairness and supervised classification performance.
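The abstract frames fairness as a counterfactual question: would the decision change if only the patient's sensitive attribute were different? The sketch below is not CFReg (whose algorithm and metric are not described here); it is a minimal, naive "attribute flip" probe on synthetic data, assuming a scikit-learn classifier, with all dataset and variable names hypothetical.

```python
# Minimal sketch (not CFReg): probe how often a model's decision flips when
# only the sensitive attribute is changed. All names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 clinical features plus one binary sensitive attribute.
n = 1000
X_clinical = rng.normal(size=(n, 5))
sensitive = rng.integers(0, 2, size=n)  # e.g., a binarized race code
y = (X_clinical[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([X_clinical, sensitive])
model = LogisticRegression().fit(X, y)

# Flip the sensitive attribute while holding the other observed features fixed,
# then measure how often the predicted decision changes.
X_flipped = X.copy()
X_flipped[:, -1] = 1 - X_flipped[:, -1]
flip_rate = np.mean(model.predict(X) != model.predict(X_flipped))
print(f"Decision flip rate under attribute flip: {flip_rate:.3f}")
```

Note that this naive probe ignores downstream causal effects of the sensitive attribute on other features; closing that gap without fitting a full causal model is precisely the problem a causal-model-free method such as CFReg targets.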