Revisiting Sanity Checks for Saliency Maps

Published: 17 Oct 2021, Last Modified: 05 May 2023
XAI 4 Debugging Workshop @ NeurIPS 2021 Poster
Keywords: saliency methods, debugging, guided backprop, interpretability, explainability, computer vision
TL;DR: We revisit the "sanity checks for saliency maps" methodology of Adebayo et al. [NeurIPS 2018], arguing that their conclusions do not follow from their empirical findings due to a form of confounding that may be inherent to the tasks they evaluated on.
Abstract: Saliency methods are a popular approach to model debugging and explainability. However, in the absence of ground-truth data specifying what the correct maps should be, evaluating and comparing different approaches remains a long-standing challenge. The sanity checks methodology of Adebayo et al. [NeurIPS 2018] has sought to address this challenge. They argue that some popular saliency methods should not be used for explainability purposes, since the maps they produce are not sensitive to the underlying model that is to be explained. Through a causal re-framing of their objective, we argue that their empirical evaluation does not fully establish these conclusions, owing to a form of confounding introduced by the tasks they evaluate on. Through experiments on simple custom tasks, we demonstrate that some of their conclusions may be artifacts of the tasks rather than genuine failures of the saliency methods themselves. More broadly, our work challenges the utility of the sanity checks methodology, and further highlights that saliency map evaluation beyond ad hoc visual inspection remains a fundamental challenge.
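For readers unfamiliar with the methodology under discussion, below is a minimal sketch of the model-parameter randomization test at the core of the sanity checks. This is not the authors' code: it assumes vanilla-gradient saliency, a torchvision ResNet-18, and a random tensor standing in for a real image, and uses Spearman rank correlation as one possible similarity measure between maps.

```python
# Minimal sketch of the model-parameter randomization sanity check
# (Adebayo et al., NeurIPS 2018): compute a saliency map for a trained
# model and for a randomly re-initialized copy, then compare the maps.
# If the maps stay highly similar, the method is insensitive to the
# model's weights, which is the failure mode the check is meant to flag.
# The model, input, and init scale here are illustrative placeholders.
import copy

import torch
import torchvision.models as models
from scipy.stats import spearmanr


def gradient_saliency(model, x):
    """Vanilla-gradient saliency: |d(max logit)/d(input)|, max over channels."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.max(dim=1).values.sum().backward()
    return x.grad.abs().max(dim=1).values  # shape (N, H, W)


model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a real image batch

# Randomize every parameter of a copy of the model. (The "cascading"
# variant of the test instead randomizes one layer at a time.)
random_model = copy.deepcopy(model)
for p in random_model.parameters():
    torch.nn.init.normal_(p, std=0.02)

s_trained = gradient_saliency(model, x).flatten().detach().numpy()
s_random = gradient_saliency(random_model, x).flatten().detach().numpy()

# High rank correlation => the saliency map barely depends on the weights.
rho, _ = spearmanr(s_trained, s_random)
print(f"Spearman rank correlation: {rho:.3f}")
```

The paper's causal re-framing concerns what such a correlation can and cannot establish: a task in which many models (trained or random) rely on similar input features can confound this comparison, so high map similarity need not indict the saliency method itself.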