GIM: Improved Interpretability for Large Language Models

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Interpretability, Explainability, Faithfulness, Mechanistic Interpretability, Feature attributions, Saliency, Circuit Identification, Self repair, Explainable LLM
TL;DR: We identify a self-repair effect within the attention mechanism and propose a new state-of-the-art gradient-based explanation method that circumvents it.
Abstract: Ensuring faithful interpretability in large language models is imperative for trustworthy and reliable AI. A key obstacle is self-repair, a phenomenon where networks compensate for reduced signal in one component by amplifying others, masking the true importance of the ablated component. While prior work attributes self-repair to layer normalization and back-up components that compensate for ablated ones, we identify a novel form occurring within the attention mechanism, where softmax redistribution conceals the influence of important attention scores. This leads traditional ablation and gradient-based methods to underestimate the significance of all components contributing to these attention scores. We introduce Gradient Interaction Modifications (GIM), a technique that accounts for self-repair during backpropagation. Extensive experiments across multiple large language models (Gemma 2B/9B, Llama 1B/3B/8B, Qwen 1.5B/3B) and diverse tasks demonstrate that GIM frequently achieves state-of-the-art results on circuit identification and feature attribution. Our work is a significant step toward better understanding the inner mechanisms of LLMs, which is crucial for improving them and ensuring their safety. Our code is available at https://anonymous.4open.science/r/explainable_transformer-D693.
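The softmax-redistribution effect described in the abstract can be illustrated with a toy example. The sketch below is not the paper's GIM method; it is a minimal, hypothetical one-row attention computation showing how ablating a dominant attention logit can barely change the output when a near-duplicate position absorbs the freed probability mass, so a naive ablation score understates that position's importance.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attn_out(logits, values):
    # One attention row: weighted sum of (scalar) values.
    w = softmax(logits)
    return sum(wi * vi for wi, vi in zip(w, values))

# Hypothetical setup: positions 0 and 1 carry nearly identical
# logits and identical values; position 2 is a weak distractor.
logits = [4.0, 3.9, 0.0]
values = [1.0, 1.0, -1.0]

base = attn_out(logits, values)

# "Ablate" position 0 by pushing its logit to -inf. Softmax
# renormalizes, position 1 absorbs most of the freed mass, and the
# output barely moves even though position 0 held >50% of the weight.
ablated = attn_out([-1e9, 3.9, 0.0], values)

print(f"weight on pos 0: {softmax(logits)[0]:.3f}")
print(f"output change under ablation: {abs(base - ablated):.3f}")
```

An ablation-based attribution would read the tiny output change as "position 0 doesn't matter," which is exactly the self-repair failure mode the paper targets.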
Primary Area: interpretability and explainable AI
Submission Number: 12392