Revisiting Attention Weights as Explanations from an Information Theoretic Perspective

Published: 21 Oct 2022, Last Modified: 05 May 2023 | Attention Workshop, NeurIPS 2022 Poster | Readers: Everyone
Keywords: attention mechanism, interpretability, information bottleneck theory
TL;DR: This work evaluates the interpretability of attention weights, and the results show that attention weights have the potential to serve as a model's explanation.
Abstract: Attention mechanisms have recently demonstrated impressive performance on a range of NLP tasks, and attention scores are often used as a proxy for model explainability. However, there is a debate on whether attention weights can, in fact, be used to identify the most important inputs to a model. We approach this question from an information-theoretic perspective by measuring the mutual information between the model output and the hidden states. From extensive experiments, we draw the following conclusions: (i) Additive and Deep attention mechanisms are likely to be better at preserving the information between the hidden states and the model output (compared to Scaled Dot-product); (ii) ablation studies indicate that Additive attention can actively learn to explain the importance of its input hidden representations; (iii) when attention values are nearly the same, the rank order of attention values is not consistent with the rank order of the mutual information; (iv) using Gumbel-Softmax with a temperature lower than one tends to produce a more skewed attention score distribution compared to softmax and hence is a better choice for explainable design; (v) some building blocks are better at preserving the correlation between the ordered list of mutual information and the ordered list of attention weights (e.g., the combination of a BiLSTM encoder and Additive attention). Our findings indicate that attention mechanisms do have the potential to function as a shortcut to model explanations when they are carefully combined with other model elements.
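The sketch below is not the authors' code; it is a minimal illustration of point (iv) of the abstract, comparing the skewness of standard softmax attention against Gumbel-Softmax attention at a temperature below one. The score values, the temperature, and the entropy-based skewness proxy are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): Gumbel-Softmax with a low
# temperature tends to yield a more peaked attention distribution than softmax.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical unnormalized attention scores over 6 tokens (nearly uniform).
scores = torch.tensor([1.2, 0.9, 1.1, 0.8, 1.0, 0.95])

# Standard softmax attention: weights stay close to uniform when scores are close.
softmax_attn = F.softmax(scores, dim=-1)

# Gumbel-Softmax with temperature tau < 1: adds Gumbel noise and sharpens the
# distribution, producing more skewed attention weights.
tau = 0.5
gumbel_attn = F.gumbel_softmax(scores, tau=tau, hard=False, dim=-1)

def entropy(p: torch.Tensor) -> torch.Tensor:
    # Lower entropy indicates a more peaked (skewed) distribution.
    return -(p * p.clamp_min(1e-12).log()).sum()

print("softmax weights:       ", softmax_attn.numpy().round(3))
print("gumbel-softmax weights:", gumbel_attn.numpy().round(3))
print(f"entropy  softmax={entropy(softmax_attn):.3f}  gumbel-softmax={entropy(gumbel_attn):.3f}")
```

In this toy setting, the Gumbel-Softmax weights typically concentrate more mass on fewer tokens than the plain softmax weights, which is the intuition behind the abstract's claim that a low-temperature Gumbel-Softmax is a better choice for explainable design.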