Challenges in Understanding Modality Conflict in Vision-Language Models
By: Trang Nguyen, Jackson Michaels, Madalina Fiterau, and more
Potential Business Impact:
Helps computers understand when pictures and words disagree.
This paper highlights the challenge of decomposing conflict detection from conflict resolution in Vision-Language Models (VLMs) and presents potential approaches, including a supervised metric based on linear probes and a group-based analysis of attention patterns. We conduct a mechanistic investigation of LLaVA-OV-7B, a state-of-the-art VLM that exhibits diverse resolution behaviors when faced with conflicting multimodal inputs. Our results show that a linearly decodable conflict signal emerges in the model's intermediate layers and that attention patterns associated with conflict detection and resolution diverge at different stages of the network. These findings support the hypothesis that detection and resolution are functionally distinct mechanisms. We discuss how such decomposition enables more actionable interpretability and targeted interventions for improving model robustness in challenging multimodal settings.
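To give a concrete sense of the linear-probe idea described above, the sketch below fits a logistic-regression probe on intermediate-layer activations to test whether a conflict signal is linearly decodable. It assumes the activations and conflict labels have already been extracted and saved; the file names, layer index, and shapes are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a linear probe for conflict detection, assuming
# per-example intermediate-layer activations have already been extracted
# from the VLM. File names, layer choice, and shapes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical precomputed data:
#   activations: (num_examples, hidden_dim) hidden states at one layer
#   labels:      (num_examples,) 1 = conflicting image/text pair, 0 = consistent
activations = np.load("layer16_activations.npy")   # assumed file
labels = np.load("conflict_labels.npy")            # assumed file

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0, stratify=labels
)

# If a simple linear classifier separates conflicting from consistent inputs,
# the conflict signal is linearly decodable at this layer.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

Repeating this probe across layers would show at which depth the conflict signal emerges, in the spirit of the layerwise analysis the abstract describes.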
Similar Papers
Mixed Signals: Decoding VLMs' Reasoning and Underlying Bias in Vision-Language Conflict
Artificial Intelligence
Helps computers understand pictures and words better.
When Seeing Overrides Knowing: Disentangling Knowledge Conflicts in Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes by showing what it sees.
How Do Vision-Language Models Process Conflicting Information Across Modalities?
Computation and Language
AI learns which information to trust when confused.