When Seeing Overrides Knowing: Disentangling Knowledge Conflicts in Vision-Language Models
By: Francesco Ortu, Zhijing Jin, Diego Doimo, and more
Potential Business Impact:
Explains and fixes AI mistakes by showing whether the model is answering from the picture it sees or from what it already knows.
Vision-language models (VLMs) increasingly leverage diverse knowledge sources to address complex tasks, often encountering conflicts between their internal parametric knowledge and external information. Knowledge conflicts can result in hallucinations and unreliable responses, but the mechanisms governing such interactions remain unknown. To address this gap, we analyze the mechanisms that VLMs use to resolve cross-modal conflicts by introducing a dataset of multimodal counterfactual queries that deliberately contradict internal commonsense knowledge. Using logit inspection, we localize a small set of attention heads that control the conflict. Moreover, by modifying these heads, we can steer the model towards its internal knowledge or the visual inputs. Finally, we show that attention from such heads pinpoints the localized image regions driving visual overrides, with greater precision than gradient-based attribution.
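The abstract's "logit inspection" of heads is in the spirit of the logit-lens technique: project each attention head's contribution to the residual stream through the unembedding matrix and check whether it promotes the answer supported by the image or the answer stored in the model's parametric knowledge. The sketch below illustrates this idea only; it is not the authors' released code, and all shapes, token ids, and the synthetic tensors standing in for model internals are assumptions.

```python
# Minimal sketch of logit-lens inspection of attention heads for a
# visual-vs-parametric knowledge conflict. Synthetic tensors stand in for
# real VLM internals; names and shapes are illustrative assumptions.
import torch

n_layers, n_heads, d_head, d_model, vocab = 4, 8, 64, 512, 1000

head_out = torch.randn(n_layers, n_heads, d_head)        # per-head output at the answer position
W_O = torch.randn(n_layers, n_heads, d_head, d_model)    # per-head slice of the output projection
W_U = torch.randn(d_model, vocab)                        # unembedding matrix (logit lens)

# Token ids of the two competing answers for a counterfactual query:
# what the image shows vs. what the model "knows" internally (hypothetical ids).
visual_tok, parametric_tok = 42, 7

# Each head's contribution to the residual stream, projected into logit space.
contrib = torch.einsum("lhd,lhdm->lhm", head_out, W_O)   # (layers, heads, d_model)
logits = contrib @ W_U                                   # (layers, heads, vocab)

# Score each head by how strongly it favors the visual answer over the
# parametric one; heads with large |score| are candidates for controlling
# the conflict, and scaling or ablating them is one way to steer the output.
conflict_score = logits[..., visual_tok] - logits[..., parametric_tok]

top = conflict_score.abs().flatten().topk(5).indices
for idx in top:
    l, h = divmod(idx.item(), n_heads)
    print(f"layer {l} head {h}: score {conflict_score[l, h].item():+.2f}")
```

In a real model the same scoring would be run over heads at the final token position of each counterfactual query, and the few heads that consistently separate the visual from the parametric answer are the ones the paper edits to steer the model.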
Similar Papers
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models
CV and Pattern Recognition
Fixes AI that makes up facts about pictures.
Challenges in Understanding Modality Conflict in Vision-Language Models
Machine Learning (CS)
Helps computers understand when pictures and words disagree.
Mixed Signals: Decoding VLMs' Reasoning and Underlying Bias in Vision-Language Conflict
Artificial Intelligence
Reveals which input a model favors when pictures and words conflict.