Interpreting ResNet-based CLIP via Neuron-Attention Decomposition

Published: 30 Sept 2025, Last Modified: 30 Sept 2025
Mech Interp Workshop (NeurIPS 2025) Poster
License: CC BY 4.0
Open Source Links: https://edmundbu.github.io/clip-neur-attn/
Keywords: Applications of interpretability, Automated interpretability, Vision transformers
TL;DR: We introduce neuron-head pairs in CLIP-ResNet as an improved unit of interpretability, validated through quantitative and qualitative analysis as well as through successful application to downstream tasks such as semantic segmentation.
Abstract: We present a novel technique for interpreting the neurons in CLIP-ResNet by decomposing their contributions to the output into individual computation paths. More specifically, we analyze all pairwise combinations of neurons and the subsequent attention heads in CLIP's attention-pooling layer. We find that each neuron-head pair's contribution can be approximated by a single direction in CLIP-ResNet's image-text embedding space. Leveraging this insight, we interpret each neuron-head pair by associating it with text. Additionally, we find that only a sparse set of neuron-head pairs contributes significantly to the output, and that some neuron-head pairs, while polysemantic, represent sub-concepts of their corresponding neurons. We use these observations for two applications. First, we utilize the pairs for training-free semantic segmentation, outperforming previous methods for CLIP-ResNet. Second, we use the contributions of neuron-head pairs to monitor dataset distribution shifts. Our results demonstrate that examining individual computation paths in neural networks uncovers interpretable units, and that such units can be utilized for downstream tasks.
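The decomposition described in the abstract can be made concrete with a short sketch. Because the attention-pooled output is linear in the value vectors, and the values are linear in the input channels ("neurons"), the output splits exactly into per-(neuron, head) terms. The PyTorch snippet below is a minimal illustration under our own assumptions: the function name `neuron_head_contributions`, the tensor names, and the (out_features, in_features) weight layout are hypothetical conveniences, not the authors' released code (see the project page above for that).

```python
import torch

def neuron_head_contributions(feats, attn, w_v, w_o, n_heads):
    """Split the attention-pooling output into per-(neuron, head) terms.

    feats: (T, C)        spatial tokens entering attention pooling
    attn:  (n_heads, T)  pooled-query attention weights for each head
    w_v:   (C, C)        value projection (out_features, in_features)
    w_o:   (E, C)        output projection (out_features, in_features)
    Returns: (C, n_heads, E) contribution of input neuron c, routed
    through head h, to the pooled output embedding (biases omitted).
    """
    T, C = feats.shape
    d = C // n_heads
    # The value vector is linear in the input channels, so split it
    # neuron by neuron: v_per[t, c, :] = feats[t, c] * w_v[:, c]
    v_per = torch.einsum('tc,ec->tce', feats, w_v)       # (T, C, C)
    v_per = v_per.reshape(T, C, n_heads, d)              # split heads
    # Attention-pool each neuron's value contribution per head.
    pooled = torch.einsum('ht,tchd->chd', attn, v_per)   # (C, H, d)
    # Apply the slice of the output projection belonging to each head.
    w_o_h = w_o.reshape(-1, n_heads, d)                  # (E, H, d)
    return torch.einsum('chd,ehd->che', pooled, w_o_h)   # (C, H, E)
```

As a sanity check, summing the result over neurons and heads recovers the attention-pooled embedding up to bias terms. Following the paper's observation that each pair is well approximated by a single direction in the joint image-text space, one could then compare each (c, h) slice against CLIP text embeddings via cosine similarity to associate the pair with text.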
Submission Number: 80