Interpreting deep neural networks for medical imaging using concept graphs

Published: 02 Feb 2021, Last Modified: 02 Sep 2024. International Workshop on Health Intelligence, AAAI 2021. License: CC BY 4.0
Abstract: The black-box nature of deep learning models prevents them from being fully trusted in domains like biomedicine. Most explainability techniques do not capture the concept-based reasoning that human beings follow. In this work, we attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn. Extracting such a graphical representation of the model's behavior at an abstract, higher conceptual level helps us unravel the steps the model takes to make predictions. We show the application of our proposed implementation on two biomedical problems: brain tumor segmentation and fundus image classification. We provide an alternative graphical representation of the model by formulating a concept-level graph as described above, and find active inference trails in the model. We work with radiologists and ophthalmologists to understand the obtained inference trails from a medical perspective, and show that medically relevant concept trails are obtained which highlight the hierarchy of the decision-making process followed by the model. Our framework is available at https://github.com/koriavinash1/BioExp.
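To make the abstract's idea concrete, the following is a minimal illustrative sketch, not the authors' actual method or the BioExp API: it clusters each layer's channel activations into stand-in "concepts", links concepts in consecutive layers by how strongly they co-activate, and reads off high-weight root-to-leaf paths as candidate inference trails. All function and variable names here are hypothetical.

```python
# Hypothetical sketch of a concept graph over layer activations.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def layer_concepts(activations, n_concepts=3, seed=0):
    """Group the channels of one layer (samples x channels) into concepts."""
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=seed)
    labels = km.fit_predict(activations.T)  # cluster the channels
    # Concept signal per sample = mean activation of its member channels.
    return np.stack([activations[:, labels == c].mean(axis=1)
                     for c in range(n_concepts)], axis=1)

def build_concept_graph(per_layer_activations, n_concepts=3):
    """Link concepts in consecutive layers by absolute correlation."""
    graph = nx.DiGraph()
    signals = [layer_concepts(a, n_concepts) for a in per_layer_activations]
    for l in range(len(signals) - 1):
        for i in range(n_concepts):
            for j in range(n_concepts):
                w = abs(np.corrcoef(signals[l][:, i], signals[l + 1][:, j])[0, 1])
                graph.add_edge((l, i), (l + 1, j), weight=w)
    return graph

def inference_trails(graph, n_layers, n_concepts, top_k=3):
    """Rank first-layer-to-last-layer concept paths by total edge weight."""
    sources = [(0, c) for c in range(n_concepts)]
    sinks = [(n_layers - 1, c) for c in range(n_concepts)]
    paths = [p for s in sources for t in sinks
             for p in nx.all_simple_paths(graph, s, t)]
    score = lambda p: sum(graph[u][v]["weight"] for u, v in zip(p, p[1:]))
    return sorted(paths, key=score, reverse=True)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in activations: 100 samples, 16 channels per layer, 3 layers.
    acts = [rng.normal(size=(100, 16)) for _ in range(3)]
    g = build_concept_graph(acts)
    for trail in inference_trails(g, n_layers=3, n_concepts=3):
        print(trail)
```

In the paper's setting the concepts would come from a trained segmentation or classification network and the trails would be validated with radiologists and ophthalmologists; the correlation-based weighting above is only a stand-in for whatever association measure the framework actually uses.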