When Seeing Overrides Knowing: Disentangling Knowledge Conflicts in Vision-Language Models

ACL ARR 2025 May Submission 4161 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Vision-language models (VLMs) increasingly leverage diverse knowledge sources to address complex tasks and therefore inevitably encounter conflicts between their internal parametric knowledge and external information. Such knowledge conflicts often lead to hallucinations and unreliable responses, yet the mechanisms governing these interactions remain poorly understood. To address this gap, we analyze how VLMs resolve cross-modal conflicts by introducing a dataset of multimodal counterfactual queries that deliberately contradict internal commonsense knowledge. Using logit inspection, we localize a small set of attention heads that control the conflict. Moreover, by modifying these heads, we can steer the model toward its internal knowledge or toward the visual inputs. Finally, we show that attention from these heads pinpoints the localized image regions that drive visual overrides, outperforming gradient-based attribution methods in precision.
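The abstract refers to logit inspection of individual attention heads and to head-level interventions for steering the model. The snippet below is a minimal, self-contained sketch of those two generic techniques, not the authors' implementation: all tensor names (W_O, W_U, head_out, alpha) and dimensions are illustrative placeholders, and the "conflict head" index is hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): per-head logit inspection and
# head-output scaling, illustrated on synthetic tensors with a standard
# multi-head attention decomposition.
import torch

d_model, n_heads, d_head, vocab = 64, 8, 8, 100
W_O = torch.randn(n_heads, d_head, d_model) * 0.02  # per-head slices of the output projection
W_U = torch.randn(d_model, vocab) * 0.02            # unembedding matrix

# Per-head attention outputs at the final token position (random stand-ins here).
head_out = torch.randn(n_heads, d_head)

# Logit inspection: project each head's contribution to the residual stream
# through the unembedding to see which vocabulary tokens that head promotes.
per_head_logits = torch.einsum("hd,hdm,mv->hv", head_out, W_O, W_U)
print(per_head_logits.argmax(dim=-1))  # top token promoted by each head

# Steering: rescale a selected head's output before it is written back to the
# residual stream, e.g. suppress a hypothetical "visual override" head.
alpha = torch.ones(n_heads)
alpha[3] = 0.0  # ablate head 3 (illustrative index only)
steered_residual_update = torch.einsum("h,hd,hdm->m", alpha, head_out, W_O)
```

In practice such inspection and scaling would be applied inside the VLM's attention layers (e.g. via forward hooks) rather than on standalone tensors; this sketch only shows the tensor-level operations involved.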
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: knowledge tracing, model editing
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4161