Challenges in Understanding Modality Conflict in Vision-Language Models

Published: September 2, 2025 | arXiv ID: 2509.02805v1

By: Trang Nguyen, Jackson Michaels, Madalina Fiterau, and more

Potential Business Impact:

Helps computers understand when pictures and words disagree.

Business Areas:
Computer Vision Hardware, Software

This paper highlights the challenge of decomposing conflict detection from conflict resolution in Vision-Language Models (VLMs) and presents potential approaches, including a supervised metric via linear probes and group-based attention-pattern analysis. We conduct a mechanistic investigation of LLaVA-OV-7B, a state-of-the-art VLM that exhibits diverse resolution behaviors when faced with conflicting multimodal inputs. Our results show that a linearly decodable conflict signal emerges in the model's intermediate layers and that attention patterns associated with conflict detection and resolution diverge at different stages of the network. These findings support the hypothesis that detection and resolution are functionally distinct mechanisms. We discuss how such a decomposition enables more actionable interpretability and targeted interventions for improving model robustness in challenging multimodal settings.
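To make the linear-probe idea concrete, here is a minimal sketch of probing for a "linearly decodable conflict signal." It uses synthetic activations standing in for a VLM's intermediate-layer hidden states; the dimensions, the planted "conflict direction," and the least-squares probe are all illustrative assumptions, not details from the paper.

```python
import numpy as np

# Synthetic stand-in for intermediate-layer hidden states of a VLM.
# label = 1 means the image/text pair is (hypothetically) conflicting.
rng = np.random.default_rng(0)
n, d = 400, 64
labels = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)            # assumed "conflict direction"
hidden = rng.normal(size=(n, d)) + np.outer(labels - 0.5, direction)

# Fit a linear probe (least squares with a bias term) on half the data,
# then check whether the conflict label is linearly decodable on the rest.
X = np.hstack([hidden, np.ones((n, 1))])
train, test = slice(0, 200), slice(200, None)
w, *_ = np.linalg.lstsq(X[train], labels[train], rcond=None)
preds = (X[test] @ w > 0.5).astype(int)
acc = (preds == labels[test]).mean()
print(f"probe accuracy: {acc:.2f}")
```

High held-out accuracy here simply reflects the planted linear signal; in the paper's setting the analogous probe is trained on real activations per layer, and the layer at which accuracy rises indicates where the conflict signal emerges.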

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)