A New 3D Image Block Ranking Method Using Axial, Coronal and Sagittal Image Patch Rankings for Explainable Medical Imaging
Keywords: convolutional neural networks, feature selection, Grad-CAM, medical imaging, disease diagnosis, image classification
Abstract: Although 3D Convolutional Neural Networks (CNNs) have been applied to explainable
medical imaging in recent years, understanding the relationships among input
2D image patches, input 3D image blocks, extracted feature maps, top-ranked
features, heatmaps, and the final diagnosis remains a significant challenge. To help
address this challenge, we first create a new 2D Grad-CAM-based
method that uses feature selection to produce explainable 2D heatmaps in which a small
number of highlighted image patches correspond to top-ranked features. Second,
we design a new 2D image patch ranking algorithm that leverages newly
defined feature matrices and relevant statistics from numerous heatmaps to
reliably rank axial, coronal, and sagittal patches. Third, we create
a novel 3D image block ranking algorithm that generates a “Block Ranking Map
(BRM)” from the axial, coronal, and sagittal patch ranking scores.
Lastly, we develop a hybrid 3D image block ranking
algorithm that generates a reliable hybrid BRM by combining the block ranking
scores that the 3D image block ranking algorithm produces for different top-feature
sets. The associations between brain areas and a brain disease are established
by combining information from ChatGPT with relevant publications.
Simulation results on two different 3D data sets indicate that the novel hybrid
3D image block ranking algorithm can identify top-ranked blocks associated
with brain areas important for Alzheimer's disease (AD) diagnosis and autism diagnosis. A doctor
may conveniently use the hybrid BRM, with its axial, coronal, and sagittal views,
to better understand the relationship between the top-ranked blocks and the medical
diagnosis, and then make a rational and explainable
medical diagnosis efficiently and effectively.
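As a minimal sketch of the block-score combination described in the abstract, the snippet below shows one way axial, coronal, and sagittal 2D patch ranking scores could be broadcast and fused into a 3D Block Ranking Map (BRM), and how several BRMs (one per top-feature set) could be averaged into a hybrid BRM. The averaging rule, grid sizes, and function names are illustrative assumptions only; the submission does not publish its exact ranking formulas here.

```python
# Hypothetical sketch: fuse per-view 2D patch scores into a 3D Block Ranking Map (BRM)
# and combine several BRMs into a hybrid BRM. Simple averaging is an assumption.
import numpy as np

def block_ranking_map(axial, coronal, sagittal):
    """Combine per-view 2D patch ranking scores into a 3D block score grid.

    axial:    (ny, nx) patch scores in the axial plane, shared across z
    coronal:  (nz, nx) patch scores in the coronal plane, shared across y
    sagittal: (nz, ny) patch scores in the sagittal plane, shared across x
    Returns a (nz, ny, nx) array where each 3D block's score is the mean of the
    three 2D patch scores whose projections cover that block (assumed rule).
    """
    ny, nx = axial.shape
    nz = coronal.shape[0]
    assert coronal.shape == (nz, nx) and sagittal.shape == (nz, ny)
    ax = np.broadcast_to(axial[None, :, :], (nz, ny, nx))
    co = np.broadcast_to(coronal[:, None, :], (nz, ny, nx))
    sa = np.broadcast_to(sagittal[:, :, None], (nz, ny, nx))
    return (ax + co + sa) / 3.0

def hybrid_brm(brms):
    """Fuse BRMs computed from different top-feature sets by simple averaging."""
    return np.mean(np.stack(brms, axis=0), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nz = ny = nx = 4  # e.g., a 4x4x4 grid of 3D image blocks
    axial = rng.random((ny, nx))
    coronal = rng.random((nz, nx))
    sagittal = rng.random((nz, ny))
    brm_a = block_ranking_map(axial, coronal, sagittal)   # e.g., top-10 features
    brm_b = block_ranking_map(axial, coronal, sagittal)   # e.g., top-20 features
    hybrid = hybrid_brm([brm_a, brm_b])
    # Top-ranked blocks: indices of the largest hybrid scores
    top = np.argsort(hybrid, axis=None)[::-1][:5]
    print(np.unravel_index(top, hybrid.shape))
```

In this sketch the top-ranked blocks are simply the largest entries of the hybrid BRM; in practice the per-view scores would come from the paper's 2D patch ranking algorithm rather than random values.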
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8513