Explaining Information Flow Inside Vision Transformers Using Markov Chain

Published: 17 Oct 2021, Last Modified: 05 May 2023. XAI 4 Debugging Workshop @ NeurIPS 2021 (Oral).
Keywords: Transformer, Model interpretability, Markov chain
TL;DR: We propose "Transition Attention Maps", which use the idea of a Markov chain to explain the information flow behind Transformer decisions.
Abstract: Transformer-based models are becoming increasingly popular in computer vision, yet the corresponding interpretability research remains limited. The simplest explainability method, visualizing raw attention weights, performs poorly because it lacks an association between the input and the model's decision. In this study, we propose a method, named \textit{Transition Attention Maps}, to generate a saliency map with respect to a specific target category. The proposed approach draws on the idea of a Markov chain to investigate the information flow across the layers of the Transformer, and combines it with integrated gradients to compute the relevance of input tokens to the model's decision. We compare against other explainability methods using the Vision Transformer as a benchmark and demonstrate that our method achieves better performance in several respects. We open-source the implementation of our approach at https://github.com/PaddlePaddle/InterpretDL.
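The core idea in the abstract (treating attention across layers as Markov-chain transitions, then weighting by a gradient-based relevance) can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the 0.5 residual-mixing coefficient, head-averaging, the CLS token sitting at index 0, and the `grads` weighting interface are all assumptions for the sketch.

```python
import numpy as np

def transition_attention_map(attentions, grads=None):
    """Sketch: propagate attention across layers as a Markov chain.

    attentions: list of (tokens, tokens) head-averaged attention matrices,
    one per Transformer layer (layer 0 first). grads (optional): a per-patch
    relevance weighting, e.g. from integrated gradients (hypothetical
    interface, not the paper's exact formulation).
    """
    n = attentions[0].shape[0]
    state = np.eye(n)
    for A in attentions:
        # Mix in the identity to account for the residual connection
        # (mixing coefficient 0.5 is an assumption), then renormalize
        # rows so each matrix is a valid stochastic transition matrix.
        T = 0.5 * A + 0.5 * np.eye(n)
        T = T / T.sum(axis=-1, keepdims=True)
        state = T @ state  # chain the transitions layer by layer
    relevance = state[0, 1:]  # CLS-token row over the patch tokens
    if grads is not None:
        relevance = relevance * grads  # weight by gradient-based relevance
    return relevance / relevance.sum()
```

Reshaping the returned vector to the patch grid (e.g. 14x14 for ViT-B/16) and upsampling yields the saliency map over the input image.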