The Disagreement Problem in Faithfulness Metrics

Published: 27 Oct 2023, Last Modified: 22 Nov 2023, NeurIPS XAIA 2023
TL;DR: We evaluate existing faithfulness metrics for XAI methods and find that they do not agree with each other.
Abstract: The field of explainable artificial intelligence (XAI) aims to explain how black-box machine learning models work. Much of the work centers on the holy grail of providing post-hoc feature attributions for any model architecture. While the pace of innovation around novel methods has slowed, the question remains of how to choose a method and how to make it fit for purpose. Recent efforts around benchmarking XAI methods have suggested metrics for that purpose, but there are many choices, and that bounty of choice still leaves an end user unclear on how to proceed. This paper compares metrics that aim to measure the faithfulness of local explanations on tabular classification problems, and shows that the current metrics do not agree, leaving users unsure how to choose the most faithful explanations.
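
As a concrete illustration of the setting (a sketch, not the paper's code), the snippet below explains one prediction of a tabular classifier with KernelSHAP and scores that explanation with two simplified perturbation-based faithfulness metrics. The dataset, model, mean-imputation baseline, and the exact metric definitions are all illustrative assumptions; published metrics differ in their perturbation and aggregation details.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
import shap

# Small tabular setup: dataset, model, and baseline are illustrative choices.
X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier().fit(X, y)
baseline = X.mean(axis=0)  # mean-imputation baseline, one of many options
f = lambda Z: model.predict_proba(Z)[:, 1]

# Explain one instance with KernelSHAP.
explainer = shap.KernelExplainer(f, shap.sample(X, 50))
x = X[0]
phi = explainer.shap_values(x, nsamples=200)

def faithfulness_corr(x, phi):
    """Correlate each feature's attribution with the output drop observed
    when that feature alone is replaced by its baseline value."""
    drops = []
    for j in range(len(x)):
        z = x.copy()
        z[j] = baseline[j]
        drops.append(f(x[None])[0] - f(z[None])[0])
    return spearmanr(phi, drops)[0]  # higher = more faithful

def deletion_score(x, phi):
    """Replace features with the baseline in descending-attribution order,
    averaging the model output along the way (lower = more faithful)."""
    z, curve = x.copy(), []
    for j in np.argsort(-np.abs(phi)):
        z[j] = baseline[j]
        curve.append(f(z[None])[0])
    return np.mean(curve)

print("faithfulness correlation:", faithfulness_corr(x, phi))
print("deletion score:          ", deletion_score(x, phi))
```

Scoring many explanations, or several attribution methods, with both metrics and then rank-correlating the two score vectors is one way to quantify how much such metrics agree; the paper's finding is that agreement of this kind is low.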
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: Evaluation of explanations for tabular data
Survey Question 1: We examine metrics that have been used to measure how faithful explanation methods are to the original model, and find them wanting. This is a problem: on top of the difficulty of selecting an explanation method, users now also have to worry about selecting a metric to evaluate it.
Survey Question 2: N/A. We are looking at how to measure the quality of explainability methods, so we have no choice but to use explainability methods.
Survey Question 3: KernelSHAP, Deep SHAP, Integrated Gradients
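
For readers unfamiliar with the three methods named above, the sketch below shows how each is typically invoked via the shap and captum libraries. The PyTorch model and data are stand-ins, not the paper's experimental setup.

```python
import torch
import torch.nn as nn
import shap
from captum.attr import IntegratedGradients

torch.manual_seed(0)
# Stand-in tabular classifier and data (not the paper's setup).
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
X = torch.randn(100, 10)
background, x = X[:50], X[:1]

# KernelSHAP: model-agnostic, needs only a black-box prediction function.
f = lambda z: model(torch.as_tensor(z, dtype=torch.float32)).detach().numpy().ravel()
kernel_phi = shap.KernelExplainer(f, background.numpy()).shap_values(x.numpy(), nsamples=200)

# Deep SHAP: exploits the network's structure instead of sampling.
deep_phi = shap.DeepExplainer(model, background).shap_values(x)

# Integrated Gradients: accumulates gradients along a baseline-to-input path.
ig = IntegratedGradients(lambda z: model(z).squeeze(1))
ig_phi = ig.attribute(x, baselines=background.mean(0, keepdim=True))
```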
Submission Number: 49