Explainable AI for computational pathology identifies model limitations and tissue biomarkers

Published: 12 Oct 2024 · Last Modified: 11 Nov 2024 · GenAI4Health Poster · License: CC BY 4.0
Keywords: computational pathology, explainable AI, attention-based multiple instance learning, ABMIL, attention, weakly-supervised learning, counterfactual, counterfactual examples, metastasis detection, prognostic models, survival
TL;DR: We introduce an explainable AI method for computational pathology based on counterfactuals.
Abstract: Deep learning models have shown promise in histopathology image analysis, but their opaque decision-making process poses challenges in high-risk medical scenarios. Here we introduce HIPPO, an explainable AI method that interrogates attention-based multiple instance learning (ABMIL) models in computational pathology by generating counterfactual examples through tissue patch modifications in whole slide images. Applying HIPPO to ABMIL models trained to detect breast cancer metastasis reveals that they may overlook small tumors and can be misled by non-tumor tissue, while attention maps, which are widely used for interpretation, often highlight regions that do not directly influence predictions. We also used HIPPO with prognostic models to identify prognostic tissue regions and to experiment with interventions that affect prognosis. These findings demonstrate HIPPO's capacity for comprehensive model evaluation, bias detection, and quantitative hypothesis testing. HIPPO greatly expands the capabilities of explainable AI tools to support the trustworthy and reliable development, deployment, and regulation of weakly supervised models in computational pathology.
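The core idea the abstract describes, generating counterfactuals by modifying tissue patches and observing how an ABMIL model's slide-level prediction changes, can be sketched in a few lines. The code below is a minimal illustration under stated assumptions: the ABMIL model, its weights, the patch embeddings, and the candidate region are all toy stand-ins, not HIPPO's actual API or a trained pathology model.

```python
# Illustrative sketch of counterfactual probing for an ABMIL model:
# occlude a region's patch embeddings, re-run the aggregator, and
# measure the change in the slide-level score. All names and data
# here are hypothetical toys, not the HIPPO implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy "slide": 100 patch embeddings of dimension 8.
embeddings = rng.normal(size=(100, 8))

# Toy ABMIL parameters (randomly initialized for illustration only).
w_attn = rng.normal(size=8)   # attention scoring vector
w_clf = rng.normal(size=8)    # slide-level linear classifier weights


def abmil_score(patches: np.ndarray) -> float:
    """Attention-weighted pooling followed by a linear classifier."""
    logits = patches @ w_attn              # per-patch attention logits
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                     # softmax over patches
    slide_repr = attn @ patches            # attention-weighted embedding
    return float(slide_repr @ w_clf)       # slide-level score


# Counterfactual: remove a hypothesized region (patches 10-19) and
# quantify how necessary it is for the prediction.
region = np.arange(10, 20)
keep = np.setdiff1d(np.arange(len(embeddings)), region)

original = abmil_score(embeddings)
counterfactual = abmil_score(embeddings[keep])
print(f"effect of removing region: {original - counterfactual:+.4f}")
```

In HIPPO's framing, a large drop in score when a region is removed indicates the region is necessary for the prediction, while an unchanged score for a highly attended region would suggest the attention map highlights tissue that does not actually drive the model's output.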
Submission Number: 21