ObEy: Quantifiable Object-based Explainability without Ground-Truth Annotations

Published: 27 Oct 2023, Last Modified: 25 Nov 2023 · NeurIPS XAIA 2023
TL;DR: We introduce a new object-based metric to quantify the explainability of neural networks and use a foundation model to circumvent the lack of annotations in the wild.
Abstract: Neural networks are at the core of AI systems that have recently seen accelerated adoption in high-stakes environments. Consequently, understanding their black-box predictive behavior is paramount. Current explainable AI techniques, however, are limited to explaining a single prediction rather than characterizing the model’s inherent ability to be explained, reducing their usefulness to manual inspection of samples. In this work, we offer a conceptual distinction between explanation methods and explainability. Motivated by this distinction, we propose Object-based Explainability (ObEy), a novel model explainability metric that collectively assesses model-produced saliency maps relative to objects in images, inspired by humans’ perception of scenes. To render ObEy independent of the prediction task, we use full-image instance segmentations obtained from a foundation model, making the metric applicable to existing models in any setting. We demonstrate ObEy’s immediate applicability to use cases in model inspection and comparison. As a result, we present new insights into the explainability of adversarially trained models from a quantitative perspective.
Submission Track: Full Paper Track
Application Domain: Computer Vision
Survey Question 1: Today's explainability methods in computer vision mostly produce explanations for single predictions from a model after it has been trained, limiting their usefulness to manually inspecting these explanations. Moving away from the level of single predictions, we introduce a metric that quantifies the inherent explainability of a model - the degree to which it can produce good explanations. We do so by using a foundation model to generate object annotations, which are typically unavailable in practice, and utilizing them as targets for sound explanations on a reference dataset, as sketched below.
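As a rough illustration of this annotation step: the page does not name the foundation model or its API, so the use of the Segment Anything Model (SAM) via the `segment_anything` package below is an assumption, and the function is only a sketch of how full-image instance masks might be obtained.

```python
# Hedged sketch: full-image instance masks from a segmentation foundation model.
# Assumes the Segment Anything Model (SAM) via the `segment_anything` package and
# a locally downloaded ViT-H checkpoint; the paper's actual pipeline may differ.
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

def generate_instance_masks(image_rgb: np.ndarray, checkpoint_path: str) -> list[np.ndarray]:
    """Return binary instance masks (HxW bool arrays) for an RGB uint8 image."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint_path)
    mask_generator = SamAutomaticMaskGenerator(sam)
    proposals = mask_generator.generate(image_rgb)   # one dict per detected object
    return [p["segmentation"] for p in proposals]    # "segmentation" is an HxW boolean mask
```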
Survey Question 2: Current deep learning models are often developed without regard to their explainability, in part because methods to quantify it are lacking; this motivates our proposal of a new metric for that purpose. It enables researchers and practitioners to analyze and compare models quantitatively in terms of their explainability, allowing a better understanding of different model conditions, architectures, and training techniques.
Survey Question 3: We use Grad-CAM to construct our own proposed metric: Object-based Explainability (ObEy).
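To make the construction concrete, the toy sketch below scores a single saliency map (e.g., produced by Grad-CAM) against the instance masks from the previous sketch by measuring how much of the saliency mass falls inside annotated objects. This overlap ratio is only an illustrative stand-in, not the published ObEy formula; the function name and the aggregation are assumptions.

```python
import numpy as np

def object_saliency_overlap(saliency: np.ndarray, instance_masks: list[np.ndarray]) -> float:
    """Toy score: fraction of total saliency mass inside any object region.
    Illustrative only; not the ObEy definition from the paper."""
    saliency = saliency - saliency.min()          # shift to non-negative values
    total = saliency.sum()
    if total == 0:
        return 0.0
    foreground = np.zeros(saliency.shape, dtype=bool)
    for mask in instance_masks:
        foreground |= mask.astype(bool)           # union of all object masks
    return float(saliency[foreground].sum() / total)

# Dummy usage: a 4x4 saliency map concentrated on a 2x2 object.
sal = np.array([[0.0, 0.1, 0.1, 0.0],
                [0.1, 0.9, 0.8, 0.1],
                [0.1, 0.7, 0.9, 0.1],
                [0.0, 0.1, 0.1, 0.0]])
obj = np.zeros((4, 4), dtype=bool)
obj[1:3, 1:3] = True
print(object_saliency_overlap(sal, [obj]))        # high value -> saliency sits on the object
```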
Submission Number: 78