Additive MIL: Intrinsically Interpretable Multiple Instance Learning for Pathology

Published: 31 Oct 2022, 18:00; Last Modified: 21 Dec 2022, 03:18
Venue: NeurIPS 2022 (Accept)
Readers: Everyone
Keywords: Interpretability, Explainability, Multiple Instance Learning, Medical Imaging, Digital Pathology, Histopathology, Saliency, Additive Models, Shapley Values, Explainable AI
TL;DR: An additive reformulation of multiple instance learning (MIL) models that provides intrinsic interpretability with applications in pathology.
Abstract: Multiple Instance Learning (MIL) has been widely applied in pathology to solve critical problems such as automating cancer diagnosis and grading, and predicting patient prognosis and therapy response. Deploying such black-box models in a clinical setting requires careful inspection during development and deployment to identify failures and maintain physician trust. In this work, we propose a simple reformulation of MIL models that enables interpretability while maintaining similar predictive performance. Our Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized. We show that this spatial credit assignment coincides with regions used by pathologists during diagnosis and improves upon classical attention heatmaps from attention MIL models. We show that any existing MIL model can be made additive with a simple change in function composition. We also show how these models can be used to debug model failures, identify spurious features, and highlight class-wise regions of interest, enabling their use in high-stakes environments such as clinical decision-making.
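The "simple change in function composition" described in the abstract can be sketched as follows: where attention MIL applies a classifier to an attention-pooled bag representation, the additive variant applies the classifier to each attended instance first and sums the resulting logits, so every patch's exact contribution to each class score is available by construction. This is a minimal illustrative sketch, not the paper's reference implementation; the layer sizes, module names, and softmax-over-instances attention are assumptions.

```python
import torch
import torch.nn as nn

class AdditiveMIL(nn.Module):
    """Illustrative additive MIL head (dimensions and names are assumptions).

    Attention MIL computes   logits = f( sum_i a_i * h_i )
    Additive MIL computes    logits = sum_i f( a_i * h_i )
    so each instance's contribution to each class logit is exact.
    """
    def __init__(self, feat_dim=512, n_classes=2):
        super().__init__()
        # Gated-style attention scorer producing one score per instance
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1)
        )
        # Per-instance classifier applied *before* pooling
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, h):
        # h: (n_instances, feat_dim) bag of patch features
        a = torch.softmax(self.attn(h), dim=0)        # (n, 1) attention weights
        contrib = self.classifier(a * h)              # (n, n_classes) patch logits
        return contrib.sum(dim=0), contrib            # bag logits + credit map
```

The second return value is the spatial credit map: summing it over instances recovers the bag-level logits exactly, which is what allows class-wise contribution heatmaps rather than a single class-agnostic attention map.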
Supplementary Material: zip