Exploring the Role of Explainability for Uncovering Bias in Deep Learning-based Medical Image Analysis

Published: 28 Apr 2023, Last Modified: 30 May 2023. MIDL 2023 Short Paper Track (Poster).
Keywords: Fairness, bias, explainability
TL;DR: This paper explores the use of XAI methods to identify bias in deep learning for medical image analysis: saliency maps showed demographic subgroup-specific differences in brain regions associated with a known confounder of the prediction task.
Abstract: Fairness and bias are critical considerations for the effective and ethical use of deep learning models for medical image analysis. Despite this, there has been minimal research on how explainable artificial intelligence (XAI) methods can be leveraged to better understand the underlying causes of bias in medical image data. To study this, we trained a convolutional neural network on brain magnetic resonance imaging (MRI) data of 4547 adolescents to predict biological sex. Performance disparities between White and Black racial subgroups were analyzed, and average saliency maps were generated for each subgroup based on sex and race. The model showed significantly higher performance in correctly classifying White males compared to Black males, and slightly higher performance for Black females compared to White females. Saliency maps indicated subgroup-specific differences in brain regions associated with pubertal development, an established confounder for this task that is also associated with race. These findings suggest that models demonstrating performance disparities can also produce differing XAI outcomes across subgroups, offering insights into potential sources of bias in medical image data.
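The subgroup analysis described in the abstract, per-subgroup accuracy alongside subgroup-averaged saliency maps, can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes per-sample saliency maps have already been computed by some attribution method (e.g., gradient-based), and the function and variable names are hypothetical.

```python
import numpy as np

def subgroup_summary(y_true, y_pred, saliency, groups):
    """Compute per-subgroup accuracy and mean saliency map.

    y_true, y_pred : (N,) arrays of class labels
    saliency       : (N, H, W) array of per-sample saliency maps
    groups         : (N,) array of subgroup identifiers
                     (e.g., combined sex-by-race labels)
    """
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        # Classification accuracy within this subgroup
        acc = float(np.mean(y_true[mask] == y_pred[mask]))
        # Average the saliency maps over all samples in the subgroup
        mean_map = saliency[mask].mean(axis=0)
        results[g] = {"accuracy": acc, "mean_saliency": mean_map}
    return results
```

Comparing the `mean_saliency` entries across subgroups (e.g., by subtraction or region-wise statistics) is one straightforward way to surface the kind of regional differences the paper reports.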