When Seeing Overrides Knowing: Disentangling Knowledge Conflicts in Vision-Language Models

Published: July 18, 2025 | arXiv ID: 2507.13868v1

By: Francesco Ortu, Zhijing Jin, Diego Doimo, and more

Potential Business Impact:

Diagnoses and corrects AI mistakes by showing which parts of an image drove the model's answer.

Business Areas:
Visual Search, Internet Services

Vision-language models (VLMs) increasingly leverage diverse knowledge sources to address complex tasks, often encountering conflicts between their internal parametric knowledge and external information. Knowledge conflicts can result in hallucinations and unreliable responses, but the mechanisms governing such interactions remain unknown. To address this gap, we analyze the mechanisms that VLMs use to resolve cross-modal conflicts by introducing a dataset of multimodal counterfactual queries that deliberately contradict internal commonsense knowledge. Using logit inspection, we localize a small set of attention heads that control the conflict. Moreover, by modifying these heads, we can steer the model toward its internal knowledge or toward the visual input. Finally, we show that attention from these heads pinpoints the localized image regions driving visual overrides, with greater precision than gradient-based attribution.
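The abstract names two concrete mechanisms: logit inspection (projecting each attention head's contribution onto the unembedding matrix to see which answer it promotes) and head steering (rescaling those heads' contributions to favor parametric knowledge or the visual input). The sketch below illustrates both on dummy tensors; it is a minimal illustration of the general technique, not the authors' code, and all shapes, token ids, and the layer-norm placement are assumptions.

```python
# Minimal sketch of logit inspection and attention-head steering on dummy
# tensors. Shapes, token ids, and names are illustrative assumptions; this
# is not the paper's released implementation.
import torch

d_model, n_heads, vocab = 64, 8, 100
torch.manual_seed(0)

# head_out[h]: head h's (already output-projected) additive update to the
# residual stream at the final token position.
head_out = torch.randn(n_heads, d_model)
resid = torch.randn(d_model) + head_out.sum(dim=0)

W_U = torch.randn(d_model, vocab)   # unembedding matrix
ln = torch.nn.LayerNorm(d_model)    # final layer norm before unembedding

tok_internal, tok_visual = 7, 42    # ids of the two conflicting answer tokens

# --- Logit inspection: how strongly does each head push the "visual" answer
# over the "internal" one? (Logit-lens style: ignores the final LayerNorm,
# so this is an approximation.)
direction = W_U[:, tok_visual] - W_U[:, tok_internal]
head_effect = head_out @ direction                  # one scalar per head
top = head_effect.abs().argsort(descending=True)[:3]
print("heads most implicated in the conflict:", top.tolist())

# --- Steering: rescale the implicated heads' contributions.
# alpha = 0 ablates them (favoring internal knowledge); alpha > 1 amplifies
# the visual override.
alpha = 0.0
steered = resid + (alpha - 1.0) * head_out[top].sum(dim=0)

def visual_margin(r):
    """Logit margin of the visual answer over the internal one."""
    logits = ln(r) @ W_U
    return (logits[tok_visual] - logits[tok_internal]).item()

print("visual-minus-internal margin before:", visual_margin(resid))
print("visual-minus-internal margin after :", visual_margin(steered))

# --- Attribution: a conflict head's attention over image-patch tokens can be
# read as a saliency map for the region driving the override (stand-in data).
n_patches = 16
attn_row = torch.softmax(torch.randn(n_patches), dim=0)
print("most influential image patch:", attn_row.argmax().item())
```

In a real VLM one would capture the per-head outputs of a chosen layer (e.g., with PyTorch forward hooks) rather than sampling random tensors; the inspection and steering arithmetic stays the same.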

Country of Origin
🇨🇦 🇮🇹 Canada, Italy

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition