Explaining Multiclass Classifiers with Categorical Values: A Case Study in Radiography

Published: 07 Mar 2023, Last Modified: 04 Apr 2023. ICLR 2023 Workshop TML4H (Oral).
Keywords: XAI, ML healthcare, trustworthy ML, Pneumonia classifier
TL;DR: We develop the Categorical Shapley value to explain the outputs of multiclass classifiers and apply it to a pneumonia type detector.
Abstract: Explainability of machine learning methods is of fundamental importance in healthcare to calibrate trust. A large branch of explainable machine learning uses tools linked to the Shapley value, which have nonetheless been found difficult to interpret and potentially misleading. Taking multiclass classification as a reference task, we argue that a critical issue in these methods is that they disregard the structure of the model outputs. We develop the Categorical Shapley value as a theoretically grounded method to explain the output of multiclass classifiers in terms of transition (or flipping) probabilities across classes. We demonstrate the method on a case study comprising three example scenarios for pneumonia detection and subtyping using X-ray images.
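The abstract's starting point, Shapley-value attributions applied to one class probability of a multiclass model, can be sketched as follows. This is a generic illustration of exact per-class Shapley values, not the paper's Categorical Shapley value; `toy_classifier`, the weight matrix `W`, and the zero baseline used to represent "absent" features are all invented for this example.

```python
import itertools
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def toy_classifier(x, W):
    # Linear logits followed by softmax: a stand-in multiclass model.
    logits = [sum(w_j * x_j for w_j, x_j in zip(row, x)) for row in W]
    return softmax(logits)

def shapley_values(f, x, baseline, class_idx):
    # Exact Shapley attribution of f(x)[class_idx] to each feature,
    # replacing features outside the coalition with baseline values
    # (a common value-function choice; feasible only for small d).
    d = len(x)
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                x_S = list(baseline)
                for j in S:
                    x_S[j] = x[j]
                x_Si = list(x_S)
                x_Si[i] = x[i]
                phi[i] += w * (f(x_Si)[class_idx] - f(x_S)[class_idx])
    return phi

# Toy example: 3 classes, 2 features.
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
x = [2.0, -1.0]
baseline = [0.0, 0.0]
f = lambda v: toy_classifier(v, W)
k = max(range(3), key=lambda c: f(x)[c])  # predicted class
phi = shapley_values(f, x, baseline, k)
```

By the efficiency axiom, the attributions `phi` sum exactly to `f(x)[k] - f(baseline)[k]`. The paper's critique applies here: attributing each class probability separately ignores that the outputs form a categorical distribution, which is what the Categorical Shapley value is designed to address.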