Keywords: Interpretability, XAI, example-based explanation, human evaluation, human study, nearest neighbors, computer vision, image classification
TL;DR: We improve the accuracy of both image classifiers and human users via a novel set of training-set nearest neighbors.
Abstract: Nearest neighbors (NN) have traditionally been used both for making final decisions—such as in Support Vector Machines or $k$-NN classifiers—and for providing users with explanations of a model's decisions.
In this paper, we introduce a novel set of nearest neighbors to enhance the predictions of a frozen, pretrained image classifier $C$, thereby integrating performance improvement with explainability.
We leverage an image comparator $S$ that (1) compares the input image with NN images from the top-$K$ most probable classes given by $C$; and (2) uses the similarity scores from $S$ to weight and refine the confidence scores of $C$.
Our method not only consistently improves the fine-grained image classification accuracy of $C$ on datasets such as CUB-200 (Birds), Cars-196, and Dogs-120, but also enhances the human interpretability of the model's decisions.
Through human studies conducted on the CUB-200 and Dogs-120 datasets, we demonstrate that presenting users with relevant examples from multiple probable classes helps them gain better insight into the model's reasoning process, improving their decision accuracy compared to prior methods that visualize training examples from only the top-1 class.
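To make the reweighting step concrete, here is a minimal Python sketch. The names `rerank_with_nns`, `comparator`, and `nn_images` are hypothetical, and the multiplicative weighting of $C$'s confidences by $S$'s similarity scores is one plausible reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def rerank_with_nns(probs, query, nn_images, comparator, top_k=10):
    """Refine a frozen classifier's confidences with comparator similarities.

    probs:      (num_classes,) softmax output of the frozen classifier C.
    query:      the input image.
    nn_images:  callable class_id -> nearest-neighbor training image of that class.
    comparator: callable (query, neighbor) -> similarity in [0, 1] (the model S).
    """
    # Take the top-K most probable classes according to C.
    top_classes = np.argsort(probs)[::-1][:top_k]

    scores = probs.copy()
    for c in top_classes:
        # Compare the input against the class's nearest-neighbor training image
        # and weight C's confidence by the resulting similarity score.
        sim = comparator(query, nn_images(c))
        scores[c] = probs[c] * sim

    # Predict the class with the highest reweighted score among the top-K.
    pred = top_classes[np.argmax(scores[top_classes])]
    return pred, scores
```

The same nearest-neighbor images used for reweighting can then be shown to users as example-based explanations, which is how the method ties the accuracy gain to interpretability.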
Track: Published paper track
Submitted Paper: No
Published Paper: Yes
Published Venue: Transactions on Machine Learning Research
Submission Number: 43