AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation

Published: 21 Sept 2023, Last Modified: 26 Dec 2023. NeurIPS 2023 poster.
Keywords: explainability, attention manipulation, perturbation, large language model, multi-modality, generative decoder, efficiency, transformer
TL;DR: We show how attention manipulation can be used as a perturbation method to steer or explain the outputs of large language models with respect to input relevance, on text and text/visual models.
Abstract: Generative transformer models have become increasingly complex, with large numbers of parameters and the ability to process multiple input modalities. Current methods for explaining their predictions are resource-intensive. Most crucially, they require prohibitively large amounts of additional memory, since they rely on backpropagation, which allocates almost twice as much GPU memory as the forward pass. This makes it difficult, if not impossible, to use explanations in production. We present AtMan, which provides explanations of generative transformer models at almost no extra cost. Specifically, AtMan is a modality-agnostic perturbation method that manipulates the attention mechanisms of transformers to produce relevance maps for the input with respect to the output prediction. Instead of using backpropagation, AtMan applies a parallelizable token-based search method relying on a cosine-similarity neighborhood in the embedding space. Our exhaustive experiments on text and image-text benchmarks demonstrate that AtMan outperforms current state-of-the-art gradient-based methods on several metrics while being computationally efficient. As such, AtMan is suitable for use in large model inference deployments.
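To illustrate the idea described in the abstract, the following is a minimal, hypothetical sketch of attention-manipulation-based relevance: suppress the attention paid to one input token (and its cosine-similarity neighborhood in embedding space) during the forward pass, and score the token by how much the target token's loss increases. It uses a toy single-head, single-layer attention with random weights; the names `attention_logits`, `relevance_map`, `factor`, and `sim_threshold` are illustrative assumptions, not the authors' implementation, which operates on the attention layers of a pretrained generative transformer.

```python
# Toy sketch (assumption, not the paper's code): attention suppression as a
# perturbation-based explanation for a single-head, single-layer "model".
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_logits(emb, W_q, W_k, W_v, W_out, suppress=None, factor=0.1):
    """Single attention layer; optionally damp the pre-softmax scores toward
    the token indices in `suppress` by a multiplicative `factor`."""
    Q, K, V = emb @ W_q, emb @ W_k, emb @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if suppress is not None:
        scores[:, suppress] += np.log(factor)   # multiply attention weight numerators by `factor`
    probs = softmax(scores, axis=-1)
    return (probs @ V) @ W_out                  # per-position vocabulary logits

def relevance_map(emb, target_id, weights, sim_threshold=0.7):
    """Relevance of each input token: increase in the target token's loss at the
    last position when that token's cosine-similarity neighborhood is suppressed."""
    W_q, W_k, W_v, W_out = weights
    base_logits = attention_logits(emb, W_q, W_k, W_v, W_out)
    base_loss = -np.log(softmax(base_logits)[-1, target_id])
    norm = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
    sims = norm @ norm.T                        # cosine similarity between token embeddings
    scores = []
    for i in range(len(emb)):
        neighborhood = np.where(sims[i] >= sim_threshold)[0]   # token i plus correlated tokens
        logits = attention_logits(emb, W_q, W_k, W_v, W_out, suppress=neighborhood)
        loss = -np.log(softmax(logits)[-1, target_id])
        scores.append(loss - base_loss)         # positive => token supports the prediction
    return np.array(scores)

rng = np.random.default_rng(0)
d, vocab, n_tokens = 16, 32, 6
emb = rng.normal(size=(n_tokens, d))
weights = [rng.normal(size=(d, d)) for _ in range(3)] + [rng.normal(size=(d, vocab))]
print(relevance_map(emb, target_id=3, weights=weights).round(3))
```

Because each perturbation is just another forward pass, the per-token searches can be batched and parallelized, which is where the memory advantage over backpropagation-based attribution comes from.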
Supplementary Material: pdf
Submission Number: 3268