INFORMER: Interpretability Founded Monitoring of Medical Image Deep Learning

04 Aug 2024 (modified: 01 Sept 2024) · MICCAI 2024 Workshop UNSURE Submission · CC BY 4.0
Keywords: Interpretability, Quality Control, Multi-label Classification, Medical Images, Deep Learning
Abstract: Deep learning models have attracted significant attention for their promising performance on medical image tasks. However, a gap remains between experimental accuracy and real-world deployment. The inherent black-box nature of deep learning models introduces uncertainty and trustworthiness issues and complicates quality control of deployed models. While quality control methods focusing on uncertainty estimation exist for segmentation tasks, comparatively few approaches target classification, particularly on multi-label datasets. This paper addresses this gap by proposing a quality control method that bridges interpretability and uncertainty estimation through a graph-based class distinctiveness calculation. On the CheXpert dataset, the proposed approach achieved a higher F1 score on the bootstrapped test set than baseline quality control approaches based on predictive entropy and test-time augmentation.
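For context, the sketch below illustrates the two baseline uncertainty scores the abstract compares against, predictive entropy and test-time augmentation, as commonly implemented for multi-label (sigmoid) classifiers. It is a minimal illustration, not the paper's method: the graph-based class distinctiveness calculation is not reproduced here, and `model` and `augment` are hypothetical placeholders for a trained classifier and a random augmentation callable.

```python
import torch

def multilabel_predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Predictive-entropy baseline for multi-label outputs.

    probs: (batch, num_labels) tensor of per-label sigmoid probabilities,
    treated as independent Bernoulli variables.
    Returns the binary entropy averaged over labels, shape (batch,).
    """
    eps = 1e-8
    p = probs.clamp(eps, 1 - eps)
    # Binary entropy per label: -[p log p + (1 - p) log (1 - p)]
    h = -(p * p.log() + (1 - p) * (1 - p).log())
    return h.mean(dim=1)

def tta_uncertainty(model, x, augment, n_aug=8):
    """Test-time-augmentation baseline: variance of sigmoid outputs
    across n_aug random augmentations of the same input batch.

    model:   trained multi-label classifier (hypothetical placeholder)
    augment: callable applying a random augmentation (hypothetical placeholder)
    Returns per-sample uncertainty, shape (batch,).
    """
    model.eval()
    with torch.no_grad():
        preds = torch.stack(
            [torch.sigmoid(model(augment(x))) for _ in range(n_aug)]
        )  # (n_aug, batch, num_labels)
    return preds.var(dim=0).mean(dim=1)
```

In a quality-control setting, either score would be thresholded to flag low-confidence predictions for review; the paper's contribution is an alternative, interpretability-founded score that it reports outperforming these baselines in F1 on bootstrapped CheXpert test sets.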
Supplementary Material: pdf
Submission Number: 20