Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals

Published: 04 Apr 2023, Last Modified: 25 Jun 2024 · MIDL 2023 Poster
Keywords: Interpretable Machine Learning, Visual Feature Attribution, Multi-label Classification.
TL;DR: A novel inherently interpretable model for multi-label classification
Abstract: Interpretability is essential for machine learning algorithms in high-stakes application fields such as medical image analysis. However, high-performing black-box neural networks do not provide explanations for their predictions, which can lead to mistrust and suboptimal human-ML collaboration. Post-hoc explanation techniques, which are widely used in practice, have been shown to suffer from severe conceptual problems. Furthermore, as we show in this paper, current explanation techniques do not perform adequately in the multi-label scenario, in which multiple medical findings may co-occur in a single image. We propose Attri-Net, an inherently interpretable model for multi-label classification. Attri-Net is a powerful classifier that provides transparent, trustworthy, and human-understandable explanations. The model first generates class-specific attribution maps based on counterfactuals to identify which image regions correspond to certain medical findings. Then a simple logistic regression classifier is used to make predictions based solely on these attribution maps. We compare Attri-Net to five post-hoc explanation techniques and one inherently interpretable classifier on three chest X-ray datasets. We find that Attri-Net produces high-quality multi-label explanations consistent with clinical knowledge and has comparable classification performance to state-of-the-art classification models.
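The abstract describes a two-stage design: class-specific attribution maps are generated first, and a simple logistic regression then predicts each label solely from its map. The following is a minimal sketch of that second stage only; the map generator, the pooling choice (a global sum), and all shapes and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, not the paper's: 3 findings, 8x8 attribution maps.
n_classes, h, w = 3, 8, 8

# Stand-in for stage 1: one class-specific attribution map per finding.
attribution_maps = rng.normal(size=(n_classes, h, w))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 2 sketch: one weight/bias pair per class; each label's score
# depends only on that class's own map, summarized here by a global sum
# (an assumed pooling choice for illustration).
weights = rng.normal(size=n_classes)
biases = np.zeros(n_classes)

pooled = attribution_maps.reshape(n_classes, -1).sum(axis=1)
probs = sigmoid(weights * pooled + biases)  # independent sigmoid per label

print(probs.shape)  # one probability per medical finding
```

Because each label is scored from its own attribution map alone, the explanation (the map) is by construction the sole evidence behind the prediction, which is what makes the model inherently interpretable in the multi-label setting.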