On Evaluating Explainability Algorithms

Sep 25, 2019 Blind Submission
  • TL;DR: We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods
  • Abstract: A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community. Yet, measuring the quality of the generated explanations remains largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, and confidence of the generated explanations. These metrics are computationally inexpensive, do not require model retraining, and can be used across different data modalities. We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients. Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works.
  • Keywords: interpretability, Deep Learning
  • Original PDF: pdf
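
As a concrete illustration of the kind of explainer being evaluated, the following is a minimal sketch of Integrated Gradients (one of the methods named in the abstract) on a toy quadratic model. The model, weights, and baseline are illustrative assumptions, not the paper's setup; the closing check uses the well-known completeness property (attributions sum to `f(x) - f(baseline)`), which is one example of a "correctness" sanity check an evaluation metric might build on.

```python
# Illustrative sketch only: the quadratic "model" and all parameter
# values are hypothetical, not taken from the paper.

def model(x, w):
    # Toy differentiable model: scalar score sum_i w_i * x_i^2
    return sum(wi * xi * xi for wi, xi in zip(w, x))

def model_grad(x, w):
    # Analytic gradient of the toy model with respect to x
    return [2 * wi * xi for wi, xi in zip(w, x)]

def integrated_gradients(x, baseline, w, steps=100):
    # Midpoint Riemann-sum approximation of the path integral
    # IG_i = (x_i - b_i) * integral_0^1 dF/dx_i(b + a * (x - b)) da
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps
        point = [bi + a * (xi - bi) for xi, bi in zip(x, baseline)]
        g = model_grad(point, w)
        for i in range(n):
            avg_grad[i] += g[i] / steps
    return [(xi - bi) * gi for xi, bi, gi in zip(x, baseline, avg_grad)]

w = [1.0, -2.0, 0.5]
x = [0.5, 1.0, -1.5]
baseline = [0.0, 0.0, 0.0]
attr = integrated_gradients(x, baseline, w)
# Completeness sanity check: attributions should sum to f(x) - f(baseline)
print(sum(attr), model(x, w) - model(baseline, w))
```

Because the toy model is quadratic, the midpoint rule integrates its linear gradient exactly, so the completeness identity holds up to floating-point error even with few steps; for a real network one would substitute automatic differentiation for `model_grad`.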