Keywords: mechanistic interpretability, vision language models, VLMs
Abstract: Vision-language models (VLMs) show impressive abilities to answer questions on visual inputs (e.g., counting objects in an image), yet they achieve higher accuracy when performing an analogous task on text (e.g., counting words in a text).
We investigate this accuracy gap by identifying and comparing the circuits---the task-specific computational sub-graphs---in different modalities.
We show that while circuits are largely disjoint between modalities, they implement relatively similar functionalities: the differences lie primarily in processing modality-specific data positions (an image or a text sequence).
Zooming in on the image data representations, we observe that they become aligned with the higher-performing analogous textual representations only towards later layers, too late in processing to effectively influence subsequent positions.
To overcome this, we patch the representations of visual data tokens from later layers back into earlier layers. 
In experiments with multiple tasks and models, this simple intervention closes a third of the performance gap between the modalities, on average.
Our analysis sheds light on the multi-modal performance gap in VLMs and suggests a training-free approach for reducing it.
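The back-patching intervention described above can be illustrated with a minimal sketch. It assumes a PyTorch VLM whose decoder layers are exposed as a module list (here `model.language_model.layers`) and a boolean mask over the visual data token positions; these names, the two-pass hook mechanism, and the specific layer indices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of back-patching: cache the visual tokens' hidden states at a
# later layer and inject them at an earlier layer during a second forward pass.
# Assumptions: `model.language_model.layers` exposes the decoder blocks and
# `image_token_mask` marks the visual data token positions (hypothetical names).
import torch


@torch.no_grad()
def back_patch(model, inputs, image_token_mask, src_layer, dst_layer):
    layers = model.language_model.layers  # assumed layer list; adjust per model
    cache = {}

    def save_hook(module, args, output):
        # Cache the later-layer hidden states of the visual tokens.
        hidden = output[0] if isinstance(output, tuple) else output
        cache["src"] = hidden[:, image_token_mask].detach().clone()

    def patch_hook(module, args, output):
        # Overwrite the earlier-layer hidden states at the same positions.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden.clone()
        hidden[:, image_token_mask] = cache["src"]
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    # Pass 1: run the model and record the later-layer visual representations.
    handle = layers[src_layer].register_forward_hook(save_hook)
    model(**inputs)
    handle.remove()

    # Pass 2: rerun while patching those representations into the earlier layer.
    handle = layers[dst_layer].register_forward_hook(patch_hook)
    out = model(**inputs)
    handle.remove()
    return out
```

In this sketch, `src_layer` would be a later layer where the image representations have aligned with their textual counterparts, and `dst_layer` an earlier layer, so that subsequent positions can attend to the improved visual representations throughout most of the forward pass.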
Supplementary Material:  zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 8362