Understanding the Size of the Feature Importance Disagreement Problem in Real-World Data

Published: 20 Jun 2023, Last Modified: 19 Jul 2023, IMLH 2023 Poster
Keywords: Explainable AI, Evaluating explanations, Variable importance, Shapley values, Permutation FI, LOCO
TL;DR: We present a novel evaluation framework to measure the influence of different elements of data complexity on the size of the disagreement problem by modifying real-world data.
Abstract: Feature importance can be used to gain insight into prediction models. However, different feature importance methods may generate different explanations for the same model, a phenomenon recently coined the explanation disagreement problem. Little is known about the size of the disagreement problem in real-world data. Such disagreements are harmful in practice, as conflicting explanations only make prediction models less transparent to end-users, which contradicts the main goal of these methods. It is therefore important to empirically analyze and understand the feature importance disagreement problem in real-world data. We present a novel evaluation framework that measures the influence of different elements of data complexity on the size of the disagreement problem by modifying real-world data. We investigate the feature importance disagreement problem in two datasets from the Dutch general practitioners database IPCI and in two open-source datasets.
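To make the disagreement problem concrete, the following minimal sketch (not the authors' framework; the dataset and model choices here are illustrative assumptions) computes feature importance on the same fitted model with two of the methods named in the keywords, permutation FI and LOCO, and quantifies their disagreement with a simple rank correlation:

```python
# Illustrative sketch of the feature importance disagreement problem:
# two FI methods, one model, one dataset -- how much do the rankings differ?
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a real-world dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Method 1: permutation FI -- drop in test accuracy when a feature is shuffled.
perm = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
perm_imp = perm.importances_mean

# Method 2: LOCO (leave-one-covariate-out) -- retrain without each feature
# and measure the resulting drop in test accuracy.
base_acc = model.score(X_te, y_te)
loco_imp = np.empty(X.shape[1])
for j in range(X.shape[1]):
    keep = [k for k in range(X.shape[1]) if k != j]
    m_j = RandomForestClassifier(n_estimators=100, random_state=0)
    m_j.fit(X_tr[:, keep], y_tr)
    loco_imp[j] = base_acc - m_j.score(X_te[:, keep], y_te)

# One simple disagreement measure: rank correlation between the two rankings
# (a value well below 1 signals that the methods disagree).
rho, _ = spearmanr(perm_imp, loco_imp)
print(f"Spearman rank correlation, permutation FI vs. LOCO: {rho:.2f}")
```

A rank correlation is only one possible disagreement measure; the paper's framework additionally varies elements of data complexity in the underlying data to see how the size of such disagreements changes.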
Submission Number: 27