Uncovering and Mitigating Transient Blindness in Multimodal Model Editing

Published: November 17, 2025 | arXiv ID: 2511.13243v1

By: Xiaoqi Han, Ru Li, Ran Yi, and more

Potential Business Impact:

Makes AI systems that combine vision and language more reliable after their knowledge is corrected.

Business Areas:
Semantic Search, Internet Services

Multimodal Model Editing (MMED) aims to correct erroneous knowledge in multimodal models. Existing evaluation methods, adapted from textual model editing, overstate success by relying on low-similarity or random inputs, obscuring overfitting. We propose a comprehensive locality evaluation framework covering three key dimensions: random-image locality, no-image locality, and consistent-image locality, operationalized through seven distinct data types, enabling a detailed and structured analysis of multimodal edits. We introduce De-VQA, a dynamic evaluation for visual question answering, and uncover a phenomenon we term transient blindness: after an edit, the model overfits to edit-similar text while ignoring visual input. Token analysis shows that edits disproportionately affect textual tokens. We therefore propose locality-aware adversarial losses to balance cross-modal representations. Empirical results demonstrate that our approach consistently outperforms existing baselines, reducing transient blindness and improving locality by 17% on average.
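The summary does not reproduce the paper's loss. As a minimal sketch of how a locality-aware editing objective of this general kind is often formed, the snippet below combines an edit loss with a penalty that keeps post-edit behavior on locality probes close to the pre-edit model; the function name, the weight `lam`, and the KL-based penalty are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def locality_aware_loss(edit_logits, edit_labels,
                        edited_probe_logits, base_probe_logits,
                        lam=0.5):
    """Hypothetical edit objective with a locality penalty.

    edit_logits / edit_labels: post-edit outputs and target answers on the
        edit sample itself.
    edited_probe_logits / base_probe_logits: post-edit vs. pre-edit outputs
        on locality probes (e.g., random-image, no-image, and
        consistent-image inputs of the kind De-VQA constructs).
    lam: assumed trade-off weight between edit success and locality.
    """
    # Standard editing objective: produce the corrected answer.
    edit_loss = F.cross_entropy(edit_logits, edit_labels)

    # Locality penalty: KL divergence between post-edit and pre-edit output
    # distributions on unrelated multimodal inputs, discouraging the edit
    # from spilling over onto text-similar but visually distinct queries.
    loc_loss = F.kl_div(
        F.log_softmax(edited_probe_logits, dim=-1),
        F.softmax(base_probe_logits, dim=-1),
        reduction="batchmean",
    )
    return edit_loss + lam * loc_loss
```

In a setup like this, raising `lam` trades raw edit success for stronger preservation of pre-edit behavior on the locality probes, which is the balance the paper's 17% average locality improvement targets.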

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)