NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks

Published: 06 Nov 2024, Last Modified: 06 Jan 2025
NLDL 2025 Oral
License: CC BY 4.0
Keywords: XAI, MIA
Abstract: A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed. NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification. In this work, we address this issue by introducing a loss function for training explanation modules that incorporates labels. It steers explanations toward target labels, while an integrated smoothing operator reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T
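The abstract describes the NEMt objective only at a high level. Below is a minimal sketch, assuming a PyTorch implementation, of one plausible form of such a loss: an occlusion-fidelity term steered toward a target output dimension, a mass term encouraging sparse masks, and a total-variation term standing in for the integrated smoothing operator. All names here (`nemt_loss`, `explainer`, the weighting parameters) are hypothetical; the actual formulation is defined in the paper and the linked repository.

```python
# Hedged sketch of a NEMt-style training objective for an explanation module.
# This is an illustration based on the abstract, not the authors' exact loss;
# see https://github.com/baerminator/NEM_T for the real implementation.
import torch

def nemt_loss(model, explainer, x, target, lam_mass=1.0, lam_tv=0.1):
    """Occlusion-based loss steered toward a chosen output dimension.

    model:     frozen differentiable model to be explained, output (B, K)
    explainer: neural explanation module producing mask logits (B, 1, H, W)
    x:         input image batch, shape (B, C, H, W)
    target:    index of the output dimension (label) to explain
    """
    mask = torch.sigmoid(explainer(x))            # soft occlusion mask in [0, 1]
    preserved = model(x * mask)[:, target]        # evidence kept under the mask
    deleted = model(x * (1.0 - mask))[:, target]  # evidence removed outside it

    # Fidelity: the masked-in input should retain the target score,
    # while the masked-out input should lose it.
    fidelity = -preserved.mean() + deleted.mean()

    # Mass/sparseness: prefer compact masks.
    mass = mask.mean()

    # Smoothing term: total variation penalizes high-frequency artifacts
    # (standing in for the paper's integrated smoothing operator).
    tv = (mask[..., 1:, :] - mask[..., :-1, :]).abs().mean() \
       + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean()

    return fidelity + lam_mass * mass + lam_tv * tv
```

In this reading, only the explanation module is updated during training, so a single forward pass of `explainer` suffices at inference time, which is consistent with the several-hundred-fold speedup over perturbation-based XAI methods reported in the abstract.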
Submission Number: 17