Keywords: Multimodal adaptation, Vision–language models (VLMs / MLLMs), Mechanistic Interpretability, Spatial Reasoning, Sparse Autoencoders, Representation dynamics
TL;DR: This paper uses mechanistic interpretability to track how language model features adapt under multimodal fine-tuning, revealing interpretable feature rotations and specialized attention heads for spatial reasoning.
Abstract: Contemporary Vision–Language Models (VLMs) achieve strong performance on a wide range of tasks by pairing a vision encoder with a pre-trained language model that is fine-tuned on visual–text inputs. Yet despite these gains, it remains unclear how the language backbone's representations adapt during multimodal training and when vision-specific capabilities emerge. In this work, we present the first mechanistic analysis of the VLM adaptation process. Using stage-wise model diffing, a technique that isolates representational changes introduced during multimodal fine-tuning, we reveal how a language model learns to "see". We first identify vision-preferring features that emerge or reorient during fine-tuning. We then show that a select subset of these features reliably encodes spatial relations, revealed through controlled shifts to spatial prompts. Finally, we trace the causal activation of these features to a small group of attention heads. Our findings show that stage-wise model diffing reveals when and where spatially grounded multimodal features arise. It also provides a clearer view of modality fusion by showing how visual grounding reshapes features that were previously text-only. This methodology enhances the interpretability of multimodal training and provides a foundation for understanding and refining how pretrained language models acquire vision-grounded capabilities.
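The feature-rotation measurement described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes (hypothetically) that matched per-feature directions, e.g. sparse autoencoder decoder rows, are available for the backbone before and after multimodal fine-tuning, and flags features whose direction has rotated by comparing cosine similarity:

```python
import numpy as np

def feature_rotation_cosines(W_base, W_tuned):
    """Cosine similarity between matched feature directions (rows)
    before and after multimodal fine-tuning. Low values indicate
    features that reoriented during adaptation."""
    a = W_base / np.linalg.norm(W_base, axis=1, keepdims=True)
    b = W_tuned / np.linalg.norm(W_tuned, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

# Toy example: 8 features in a 16-dim residual stream.
rng = np.random.default_rng(0)
W_base = rng.normal(size=(8, 16))        # "text-only" feature directions
W_tuned = W_base.copy()
W_tuned[0] = rng.normal(size=16)         # feature 0 reorients under fine-tuning
cos = feature_rotation_cosines(W_base, W_tuned)
rotated = np.where(cos < 0.9)[0]         # flag strongly rotated features
print(rotated)
```

In practice the thresholding and feature matching would be more involved; the point is only that "reorienting" features are those whose pre- and post-fine-tuning directions diverge.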
Primary Area: interpretability and explainable AI
Submission Number: 22723