Keywords: Vision transformers, Steering
TL;DR: The paper shows that specific attention heads govern how vision-language models resolve conflicts between internal knowledge and visual inputs, enabling controllable steering and more precise attribution than gradient-based methods.
Abstract: Vision-language models (VLMs) increasingly leverage diverse knowledge sources to address complex tasks, often encountering conflicts between their internal parametric knowledge and external information.
Knowledge conflicts can result in hallucinations and unreliable responses, yet the mechanisms governing how models resolve such conflicts remain unknown.
To address this gap, we analyze the mechanisms that VLMs use to resolve cross-modal conflicts by introducing a dataset of multimodal counterfactual queries that deliberately contradict internal commonsense knowledge.
Using logit inspection, we localize a small set of attention heads that control this conflict.
Moreover, by modifying these heads, we can steer the model towards its internal knowledge or the visual inputs.
Finally, we show that attention from such heads pinpoints localized image regions driving visual overrides, outperforming gradient-based attribution in precision.
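The steering described above can be illustrated with a minimal PyTorch sketch, not the authors' released code: the contribution of chosen attention heads is rescaled before the attention output projection, pushing the model towards its parametric knowledge (amplify) or the visual input (suppress). Module names such as `o_proj`, the layer index, head indices, and the scaling factor are illustrative assumptions; the actual layout depends on the specific VLM.

```python
import torch

def head_scaling_pre_hook(head_indices, scale, num_heads):
    """Pre-hook for an attention output projection whose input is assumed to be
    the concatenation of per-head outputs, shape (batch, seq, num_heads * head_dim)."""
    def hook(module, args):
        hidden = args[0]
        bsz, seq, dim = hidden.shape
        head_dim = dim // num_heads
        # Split the hidden dimension into heads, rescale the selected ones.
        hidden = hidden.view(bsz, seq, num_heads, head_dim).clone()
        hidden[:, :, head_indices, :] *= scale  # scale > 1 amplifies, scale = 0 ablates
        return (hidden.view(bsz, seq, dim),) + tuple(args[1:])
    return hook

# Hypothetical usage: suppress heads 3 and 7 in layer 20 of the language backbone.
# attn = model.language_model.layers[20].self_attn
# handle = attn.o_proj.register_forward_pre_hook(
#     head_scaling_pre_hook([3, 7], scale=0.0, num_heads=attn.num_heads))
# ... run generation, then handle.remove() to restore the original behavior.
```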
Submission Number: 17