Deep Explainable Ensemble for Medical Image Analysis

Published: 01 Jan 2024 · Last Modified: 07 Sept 2025 · TENCON 2024 · CC BY-SA 4.0
Abstract: Traditionally, medical images have been analyzed manually. This manual analysis can be subjective, depending on the judgment of individual radiologists and pathologists, which can result in inconsistency and inefficiency. Computer vision and deep learning models have revolutionized the study and diagnosis of medical images such as X-rays, CT scans, and MRIs. Despite their effectiveness, these models often act as black boxes and are difficult to interpret. In this study, we propose an ensemble of two popular deep learning models, ResNet101 and EfficientNetB7, that not only achieves higher classification accuracy on CT scans but also offers superior interpretability. Our ensemble outperforms the constituent models on COVID-19 CT scan classification by more than 2%. We compare three eXplainable AI (XAI) visualization methods, GradCAM, Guided-GradCAM, and LIME, against ground truth regions labeled by radiologists. The proposed ensemble provides better interpretability for CT scan images than the individual models.
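
The sketch below illustrates one way such a two-backbone ensemble could be assembled in PyTorch. The abstract does not specify how the ResNet101 and EfficientNetB7 outputs are combined, so averaging their softmax probabilities (soft voting), the binary class count, the input resolution, and the use of randomly initialized weights are all assumptions made for illustration only.

```python
# Minimal sketch of a ResNet101 + EfficientNetB7 ensemble for CT-scan
# classification. The combination rule (soft voting), num_classes=2, and
# weights=None (no pretraining) are assumptions, not the paper's method.
import torch
import torch.nn as nn
from torchvision import models


class CTEnsemble(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Backbone 1: ResNet101 with its final fully connected layer
        # replaced for the CT-scan classification task.
        self.resnet = models.resnet101(weights=None)
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, num_classes)

        # Backbone 2: EfficientNetB7 with its classifier head replaced.
        self.effnet = models.efficientnet_b7(weights=None)
        self.effnet.classifier[1] = nn.Linear(
            self.effnet.classifier[1].in_features, num_classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average the per-model class probabilities (soft voting).
        p1 = torch.softmax(self.resnet(x), dim=1)
        p2 = torch.softmax(self.effnet(x), dim=1)
        return (p1 + p2) / 2


if __name__ == "__main__":
    model = CTEnsemble(num_classes=2).eval()
    dummy_ct = torch.randn(1, 3, 224, 224)  # placeholder CT slice batch
    with torch.no_grad():
        print(model(dummy_ct))  # averaged class probabilities
```

In practice, each backbone would be initialized from pretrained weights and fine-tuned on the CT dataset before the ensemble prediction is formed; GradCAM, Guided-GradCAM, or LIME can then be applied to the trained backbones to produce the saliency maps compared against radiologist annotations.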