Approximating Probabilistic Explanations via Supermodular Minimization

Published: 08 May 2023, Last Modified: 26 Jun 2023
Venue: UAI 2023
Keywords: Explainable Artificial Intelligence, Probabilistic Explanations, Submodular Minimization, Approximation Algorithms
TL;DR: In the setting of probabilistic explanations, we propose two greedy approximation algorithms for minimizing explanation errors subject to a cardinality constraint.
Abstract: Explaining in accurate and intelligible terms the predictions made by classifiers is a key challenge of eXplainable Artificial Intelligence (XAI). To this end, an abductive explanation for the predicted label of some data instance is a subset-minimal collection of features such that the restriction of the instance to these features is sufficient to determine the prediction. However, due to cognitive limitations, abductive explanations are often too large to be interpretable. In such cases, we need to reduce the size of abductive explanations, while still determining the predicted label with high probability. In this paper, we show that finding such probabilistic explanations is NP-hard, even for decision trees. In order to circumvent this issue, we investigate the approximability of probabilistic explanations through the lens of supermodularity. We examine both greedy descent and greedy ascent approaches for supermodular minimization, whose approximation guarantees depend on the curvature of the "unnormalized" error function that evaluates the precision of the explanation. Based on various experiments for explaining decision tree predictions, we show that our greedy algorithms provide an efficient alternative to the state-of-the-art constraint optimization method.
Supplementary Material: pdf
Other Supplementary Material: zip
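The abstract describes two greedy strategies for supermodular minimization under a cardinality constraint. Below is a minimal Python sketch of generic greedy descent and greedy ascent over a set of features; it is an illustration under assumptions, not the paper's implementation. The error oracle err, the function names, and the toy usage are hypothetical.

# Minimal sketch (illustration only, not the authors' code): greedy descent and
# greedy ascent for selecting at most k features of an abductive explanation
# while keeping an "unnormalized" explanation error low. The oracle `err` is a
# hypothetical callable mapping a set of feature indices to a nonnegative error
# value; for probabilistic explanations it would measure how often instances
# consistent with the selected features receive a different label.

def greedy_descent(features, k, err):
    """Start from the full feature set and repeatedly drop the feature whose
    removal increases the error the least, until only k features remain."""
    selected = set(features)
    while len(selected) > k:
        worst = min(selected, key=lambda f: err(frozenset(selected - {f})))
        selected.remove(worst)
    return selected

def greedy_ascent(features, k, err):
    """Start from the empty set and repeatedly add the feature that lowers the
    error the most, until k features have been selected."""
    selected = set()
    candidates = set(features)
    while len(selected) < k and candidates:
        best = min(candidates, key=lambda f: err(frozenset(selected | {f})))
        selected.add(best)
        candidates.remove(best)
    return selected

# Toy usage with a synthetic error oracle: the error is the number of
# "important" features left out of the selection (purely illustrative).
if __name__ == "__main__":
    important = frozenset({0, 2, 5})
    err = lambda s: len(important - s)
    print(greedy_descent(range(8), 3, err))  # -> {0, 2, 5}
    print(greedy_ascent(range(8), 3, err))   # -> {0, 2, 5}

Both routines make O(k·n) calls to the error oracle for n features; the approximation guarantees discussed in the paper depend on the curvature of the error function, which this sketch does not model.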