How (Un)Faithful is Attention?

Published: 01 Jan 2022 · Last Modified: 14 Jun 2024 · BlackboxNLP@EMNLP 2022 · CC BY-SA 4.0
Abstract: Although attention weights have commonly been used as a means of providing explanations for deep learning models, the approach has been widely criticized for its lack of faithfulness. In this work, we present a simple approach to computing AtteFa, a newly proposed metric that quantitatively represents the degree of faithfulness of attention weights. Using this metric, we further examine how the frequency of informative input elements and the use of contextual vs. non-contextual encoders affect the faithfulness of the attention mechanism. Finally, we apply the approach to several real-life binary classification datasets to measure the faithfulness of attention weights in practical settings.
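The abstract does not spell out how AtteFa is computed, so the sketch below is an illustration only, not the paper's method: a generic way to probe attention faithfulness by rank-correlating attention weights with leave-one-out (erasure) importance scores. The `predict_proba` callable and `pad_id` argument are hypothetical placeholders for a binary classifier's scoring interface.

```python
import numpy as np
from scipy.stats import kendalltau

def erasure_importance(predict_proba, token_ids, pad_id=0):
    """Leave-one-out importance: the drop in the predicted probability
    of the positive class when each token is masked out.
    `predict_proba` is assumed to map a token-id sequence to a scalar."""
    base = predict_proba(token_ids)
    scores = []
    for i in range(len(token_ids)):
        masked = list(token_ids)
        masked[i] = pad_id  # replace the token with a pad/UNK id
        scores.append(base - predict_proba(masked))
    return np.array(scores)

def attention_faithfulness(attention_weights, importance_scores):
    """Kendall rank correlation between attention weights and erasure
    importance; values near 1 suggest attention tracks the tokens that
    actually drive the prediction."""
    tau, _ = kendalltau(attention_weights, importance_scores)
    return tau
```

In a faithfulness study of this kind, the per-example correlation is typically averaged over a dataset, which makes it possible to compare, for instance, contextual and non-contextual encoders under the same scoring procedure.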