Generalizable and explainable deep learning for medical image computing: An overview

Published: 28 Feb 2025, Last Modified: 12 Feb 2025, Current Opinion in Biomedical Engineering, CC BY 4.0
Abstract:

Objective: This paper presents an overview of generalizable and explainable artificial intelligence (XAI) in deep learning (DL) for medical imaging, with the aim of addressing the urgent need for transparency and explainability in clinical applications.

Methodology: We evaluate four CNN architectures on three medical imaging datasets (brain tumor, skin cancer, and chest X-ray) for classification tasks. We then combine ResNet50 with five common XAI techniques to produce explanations of model predictions and improve model transparency, and we employ a quantitative metric (confidence increase) to evaluate the usefulness of each XAI technique.

Key findings: The experimental results indicate that ResNet50 achieves feasible accuracy and F1 scores on all datasets (e.g., 86.31% accuracy on skin cancer). Furthermore, the findings show that while certain XAI methods, such as eXplanation with Gradient-weighted Class Activation Mapping (XgradCAM), effectively highlight relevant abnormal regions in medical images, others, such as EigenGradCAM, perform less effectively in specific scenarios. In addition, XgradCAM yields a higher confidence increase (e.g., 0.12 on glioma tumor) than GradCAM++ (0.09) and LayerCAM (0.08).

Implications: Based on the experimental results and recent advances, we outline future research directions to enhance the generalizability of DL models in biomedical imaging.
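As an illustration of the kind of pipeline the abstract describes, below is a minimal sketch of applying XGradCAM to a ResNet50 classifier using the open-source pytorch-grad-cam library (jacobgil/pytorch-grad-cam). The pretrained weights, target layer, random input tensor, and class index are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch: XGradCAM on a torchvision ResNet50 via the
# open-source pytorch-grad-cam library. All concrete settings
# (weights, target layer, class index) are illustrative assumptions.
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import XGradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(weights="IMAGENET1K_V2").eval()

# For ResNet50, the last bottleneck block of layer4 is the usual
# target for CAM methods (deepest conv features before pooling).
target_layers = [model.layer4[-1]]

input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
class_idx = 0                               # hypothetical target class

cam = XGradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(class_idx)])[0]
# grayscale_cam is an HxW map in [0, 1], upsampled to the input size,
# highlighting the regions that drive the prediction.
```

The same library exposes GradCAMPlusPlus, EigenGradCAM, and LayerCAM with the same constructor signature, so comparing the five XAI techniques amounts to swapping the class in the `XGradCAM(...)` line.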
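The abstract does not spell out its exact definition of the confidence-increase metric; a common formulation (in the spirit of Grad-CAM++'s "increase in confidence") multiplies the input by the CAM and measures the change in the target-class softmax score. The sketch below implements that variant under those assumptions and may differ from the paper's metric.

```python
# Hedged sketch of a confidence-increase style metric: restrict the
# input to CAM-highlighted regions and measure the change in the
# target-class probability. This follows the common "CAM-multiplied
# image" formulation; the paper's exact definition may differ.
import torch
import torch.nn.functional as F

def confidence_increase(model, input_tensor, grayscale_cam, class_idx):
    """Change in softmax confidence when the image is masked by the CAM.
    Positive values suggest the explanation covers evidence the model
    actually relies on."""
    cam = torch.as_tensor(grayscale_cam, dtype=input_tensor.dtype)
    cam = cam.unsqueeze(0).unsqueeze(0)   # 1 x 1 x H x W, broadcasts over channels
    masked = input_tensor * cam           # keep only highlighted pixels

    with torch.no_grad():
        p_full = F.softmax(model(input_tensor), dim=1)[0, class_idx]
        p_masked = F.softmax(model(masked), dim=1)[0, class_idx]
    return (p_masked - p_full).item()
```

Under this reading, the reported values (e.g., 0.12 for XgradCAM vs. 0.09 for GradCAM++ on glioma tumor) would correspond to averages of this quantity over the test images.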