COSE: A Consistency-Sensitivity Metric for Saliency on Image Classification

Published: 31 Jul 2023, Last Modified: 17 Aug 2023 · VIPriors 2023 Oral
Keywords: saliency maps, saliency metrics, saliency method evaluation, model explanations, image classification explanations
TL;DR: We propose a metric (COSE) that effectively evaluates saliency methods by quantifying invariance and equivariance of saliency maps using data and model augmentations
Abstract: We present a set of metrics that use vision priors to effectively assess the performance of saliency methods on image classification tasks. To explain the behavior of deep learning models, many saliency methods produce visual saliency maps that highlight the image regions contributing most to a model's prediction. However, there is limited work analyzing how reliably saliency methods explain model decisions. We propose COnsistency-SEnsitivity (COSE), a metric that quantifies the equivariant and invariant properties of visual model explanations using simple data augmentations. Through our metrics, we show that although saliency methods are often assumed to be architecture-independent, most explain transformer-based models better than convolution-based models. In addition, GradCAM outperforms other methods in terms of COSE but exhibits limitations such as variability on fine-grained datasets. The duality between consistency and sensitivity allows saliency methods to be analyzed from different angles. Ultimately, we find that balancing these two metrics is important for a saliency map to faithfully reflect model behavior.
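The page does not give COSE's exact formulation, so the sketch below is only an illustration of the general idea described in the abstract: score invariance (consistency) of saliency maps under augmentations that should not change the explanation, score equivariance (sensitivity) under geometric transforms that should move the explanation with the image, and balance the two. The names `saliency_fn`, `invariant_augs`, and `equivariant_augs`, the cosine-similarity measure, and the harmonic-mean combination are all assumptions for illustration, not the authors' definition.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened saliency maps."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def consistency(saliency_fn, image, invariant_augs):
    """Invariance: mean similarity of saliency maps under augmentations
    that should leave the explanation unchanged (e.g., color jitter)."""
    base = saliency_fn(image)
    sims = [cosine_sim(base, saliency_fn(aug(image))) for aug in invariant_augs]
    return float(np.mean(sims))

def sensitivity(saliency_fn, image, equivariant_augs):
    """Equivariance: mean similarity between the saliency of a transformed
    image and the transformed saliency of the original. The augs must be
    geometric transforms (e.g., flips) applicable to both image and map."""
    base = saliency_fn(image)
    sims = [cosine_sim(aug(base), saliency_fn(aug(image))) for aug in equivariant_augs]
    return float(np.mean(sims))

def cose(saliency_fn, image, invariant_augs, equivariant_augs):
    """Hypothetical combination: a harmonic mean rewards methods that
    balance both properties, mirroring the abstract's conclusion."""
    c = consistency(saliency_fn, image, invariant_augs)
    s = sensitivity(saliency_fn, image, equivariant_augs)
    return 2 * c * s / (c + s + 1e-8)
```

For example, with `saliency_fn = lambda x: gradcam(model, x)` and `equivariant_augs = [np.fliplr]`, a method whose map flips along with the image scores high on sensitivity while remaining stable under photometric jitter.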
Supplementary Material: pdf
Submission Number: 9