Advancing Nearest Neighbor Explanation-by-Example with Critical Classification Regions

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submission
Keywords: Explainable AI, Post-hoc Nearest Neighbor Explanation-by-Example, User Study, Case-based Reasoning, Convolutional Neural Network
Abstract: There is an increasing body of evidence suggesting that post-hoc explanation-by-example with nearest neighbors is a promising solution for the eXplainable Artificial Intelligence (XAI) problem. However, despite decades of research, such post-hoc methods have never seriously explored how to enhance these explanations by highlighting specific important "parts" in a classification. Here, we propose the notion of Critical Classification Regions (CCRs) to do this, and we experimentally compare several candidate methods to determine the best approach for this explanation strategy. CCRs supplement nearest neighbor examples by highlighting similar important "parts" in the image explanation. Experiments across multiple domains show that CCRs capture key features used by the CNN in both the testing and training data. Finally, a suitably controlled user study (N=163) on ImageNet shows that CCRs improve people's assessments of the correctness of a CNN's predictions for classifications that are difficult due to ambiguity.
One-sentence Summary: We show how to identify important "parts" of images in post-hoc nearest neighbor explanation-by-example, before testing it both computationally and in a user study.
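
For readers unfamiliar with the basic setup, the sketch below illustrates post-hoc nearest neighbor explanation-by-example for a CNN: the query image is explained by retrieving the training image whose penultimate-layer embedding is closest to the query's. This is a minimal sketch under assumed choices (ResNet-50 features, cosine similarity, and a crude mean-activation heatmap as a stand-in for region highlighting); it is not the paper's CCR method, which is described in the submission itself.

```python
# Hypothetical sketch of post-hoc nearest neighbor explanation-by-example.
# Feature extractor, distance metric, and the activation-based "highlight"
# below are illustrative assumptions only -- NOT the paper's CCR approach.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Penultimate-layer embeddings (2048-d for ResNet-50) define the feature
# space in which "nearest neighbor" is computed here.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

@torch.no_grad()
def embed(images):
    """images: (N, 3, 224, 224), ImageNet-normalized -> (N, 2048) embeddings."""
    return feature_extractor(images).flatten(1)

@torch.no_grad()
def nearest_neighbor_explanation(query, train_images):
    """Return the index of the training image whose embedding is closest
    (by cosine similarity) to the query's -- the explanatory example."""
    q = F.normalize(embed(query), dim=1)          # (1, 2048)
    t = F.normalize(embed(train_images), dim=1)   # (M, 2048)
    sims = q @ t.T                                # cosine similarities
    return sims.argmax(dim=1).item()

@torch.no_grad()
def activation_heatmap(image):
    """Crude stand-in for region highlighting: mean activation of the last
    convolutional block, upsampled to the input resolution."""
    conv_body = torch.nn.Sequential(*list(model.children())[:-2])
    fmap = conv_body(image)                       # (1, 2048, 7, 7)
    heat = fmap.mean(dim=1, keepdim=True)         # (1, 1, 7, 7)
    return F.interpolate(heat, size=image.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze()
```

In this framing, the paper's contribution is to replace the naive whole-image heatmap above with CCRs, i.e. matched important "parts" highlighted in both the query and its nearest-neighbor explanation.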
Supplementary Material: zip