Bounded logit attention: Learning to explain image classifiers

Published: 21 Oct 2022, Last Modified: 12 Mar 2024
Attention Workshop, NeurIPS 2022 Poster
Keywords: Explainable artificial intelligence, self-learned explainability, convolutional neural networks, image classification, feature selection, beta activation function
TL;DR: We present a trainable self-explanation module for convolutional neural networks based on an attention mechanism using a novel type of activation function.
Abstract: Explainable artificial intelligence is the attempt to elucidate the workings of systems too complex to be directly accessible to human cognition through suitable side-information referred to as “explanations”. We present a trainable explanation module for convolutional image classifiers we call bounded logit attention (BLA). The BLA module learns to select a subset of the convolutional feature map for each input instance, which then serves as an explanation for the classifier’s prediction. BLA overcomes several limitations of the instance-wise feature selection method “learning to explain” (L2X) introduced by Chen et al. (2018): 1) BLA scales to real-world-sized image classification problems, and 2) BLA offers a canonical way to learn explanations of variable size. Due to its modularity, BLA lends itself to transfer learning setups and can also be employed as a post-hoc add-on to trained classifiers. Beyond explainability, BLA may serve as a general-purpose method for differentiable approximation of subset selection. In a user study we find that BLA explanations are preferred over explanations generated by the popular (Grad-)CAM method (Zhou et al., 2016; Selvaraju et al., 2017).
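The abstract describes BLA as learning a soft, differentiable selection over the spatial locations of a convolutional feature map, inserted between a CNN backbone and its classifier head. The PyTorch sketch below illustrates that general idea only; the 1×1 logit convolution, the clamp-based bound `beta`, and the sigmoid gating are illustrative assumptions, not the paper's actual activation function or training objective.

```python
import torch
import torch.nn as nn


class BLASketch(nn.Module):
    """Illustrative instance-wise feature-selection module in the spirit of
    bounded logit attention (BLA). This is a hypothetical sketch, not the
    authors' implementation: the 1x1 logit convolution, the upper bound
    ``beta``, and the sigmoid gating are assumptions made for illustration.
    """

    def __init__(self, channels: int, beta: float = 1.0):
        super().__init__()
        # One selection logit per spatial location of the feature map.
        self.logit_conv = nn.Conv2d(channels, 1, kernel_size=1)
        self.beta = beta  # upper bound on the selection logits

    def forward(self, features: torch.Tensor):
        # features: (batch, channels, height, width) convolutional feature map
        logits = self.logit_conv(features)  # (batch, 1, H, W)
        # Bounding the logits from above caps how strongly a location can be
        # selected, while leaving them unbounded below so the module can
        # suppress irrelevant locations arbitrarily strongly.
        bounded = torch.clamp(logits, max=self.beta)
        mask = torch.sigmoid(bounded)  # soft, differentiable selection weights
        # The masked feature map is passed on to the classifier head; the
        # mask itself serves as the per-instance explanation.
        return features * mask, mask


# Usage: plug the module between a CNN backbone and its classifier head.
backbone_features = torch.randn(2, 512, 7, 7)  # e.g. a ResNet feature map
bla = BLASketch(channels=512, beta=1.0)
selected, explanation_mask = bla(backbone_features)
print(selected.shape, explanation_mask.shape)  # (2, 512, 7, 7) (2, 1, 7, 7)
```

Because the module only consumes and re-emits the feature map, it can be trained jointly with the classifier or attached post hoc to a frozen backbone, which is the modularity property the abstract highlights.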
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2105.14824/code)