MemEIC: A Step Toward Continual and Compositional Knowledge Editing

Published: October 29, 2025 | arXiv ID: 2510.25798v1

By: Jin Seong, Jiyun Park, Wencke Liermann, and more

Potential Business Impact:

Enables AI models to learn new information without forgetting earlier updates.

Business Areas:
Knowledge Management, Administrative Services

The dynamic nature of information necessitates continuously updating large vision-language models (LVLMs). While recent knowledge editing techniques hint at promising directions, they often edit a single modality (vision or language) in isolation. This practice neglects the inherent multimodality of LVLMs and the continuous nature of knowledge updates, which can lead to suboptimal edits that ignore the interplay between modalities and the need for ongoing refinement. To address these limitations, we propose MemEIC, a novel method for Continual and Compositional Knowledge Editing (CCKE) in LVLMs. MemEIC enables sequential, compositional editing of both visual and textual knowledge. Our approach employs a hybrid external-internal editor featuring a dual external memory for cross-modal evidence retrieval and dual LoRA adapters that provide disentangled parameter updates for each modality. A key component is a brain-inspired knowledge connector, activated selectively for compositional reasoning, that integrates information across modalities. Experiments demonstrate that MemEIC significantly improves performance on complex multimodal questions while effectively preserving prior edits, setting a new benchmark for CCKE in LVLMs.
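The abstract's internal editor (dual LoRA adapters with a selectively activated connector) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the names (`A_vis`, `B_vis`, `connector_gate`), the additive fusion, and all dimensions are assumptions chosen to show the general LoRA pattern of disentangled per-modality low-rank updates over a frozen base weight.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy hidden size and LoRA rank (assumed, not from the paper)

# Frozen base weight of one layer (stand-in for an LVLM projection).
W = rng.normal(size=(d, d))

# Two disentangled LoRA-style updates, one per modality.
# B matrices start at zero, so an unedited model is unchanged.
A_vis, B_vis = rng.normal(size=(d, r)), np.zeros((r, d))
A_txt, B_txt = rng.normal(size=(d, r)), np.zeros((r, d))

def edited_forward(x, use_vis, use_txt, connector_gate=0.0):
    """Frozen weight plus the selected modality adapters.

    connector_gate sketches the selectively activated knowledge
    connector: 0.0 for single-modality edits, > 0.0 when a
    compositional question needs both modalities fused (toy additive
    fusion; the paper's connector is more elaborate).
    """
    h = x @ W
    if use_vis:
        h = h + x @ A_vis @ B_vis
    if use_txt:
        h = h + x @ A_txt @ B_txt
    if connector_gate > 0.0:
        fused = (x @ A_vis @ B_vis) + (x @ A_txt @ B_txt)
        h = h + connector_gate * fused
    return h
```

Keeping the two adapters separate means a visual edit only touches `A_vis`/`B_vis`, leaving textual edits intact, which is one plausible reading of how sequential edits avoid interfering across modalities.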

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
38 pages

Category
Computer Science: Machine Learning (CS)