Towards Meta-Cognitive Knowledge Editing for Multimodal LLMs

Published: September 6, 2025 | arXiv ID: 2509.05714v1

By: Zhaoyu Fan, Kaihang Pan, Mingze Zhou, and more

Potential Business Impact:

Teaches AI to fix its own wrong ideas.

Business Areas:
Semantic Search, Internet Services

Knowledge editing enables multimodal large language models (MLLMs) to efficiently update outdated or incorrect information. However, existing benchmarks primarily emphasize cognitive-level modifications while lacking a focus on deeper meta-cognitive processes. To bridge this gap, we introduce CogEdit, a novel benchmark designed to evaluate MLLMs' meta-cognitive knowledge editing abilities across three levels: (1) Counterfactual-Driven Editing, assessing self-awareness of knowledge correctness changes; (2) Boundary Constraint Editing, ensuring appropriate generalization without unintended interference; and (3) Noise-Robust Editing, promoting reflective evaluation of uncertain information. To advance meta-cognitive editing, we propose MIND (Meta-cognitive INtegrated Dynamic Knowledge Editing), a framework that constructs a meta-knowledge memory for self-awareness, employs game-theoretic interactions to monitor knowledge activation, and incorporates label refinement for noise-robust updates. Extensive experiments show that MIND significantly outperforms existing cognitive editing approaches, achieving strong performance on both traditional and meta-cognitive knowledge editing benchmarks.
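The abstract's three editing levels can be illustrated with a toy sketch: a memory of edited facts carrying meta-annotations, where an edit is applied only if the query falls inside its declared scope (boundary constraint) and its label confidence clears a threshold (noise robustness). This is a minimal illustration of the idea under assumed names (`MetaFact`, `MetaKnowledgeMemory`), not the authors' MIND implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MetaFact:
    # An edited fact plus meta-cognitive annotations (all fields hypothetical).
    subject: str
    answer: str
    confidence: float          # trust in the edit label, for noise-robust updates
    scope: set = field(default_factory=set)  # queries the edit should generalize to

class MetaKnowledgeMemory:
    """Toy memory: apply an edit only when the query is in scope and the
    label confidence exceeds a threshold; otherwise keep the base answer."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.facts: dict[str, MetaFact] = {}

    def add_edit(self, fact: MetaFact) -> None:
        self.facts[fact.subject] = fact

    def answer(self, subject: str, query: str, base_answer: str) -> str:
        fact = self.facts.get(subject)
        if fact and query in fact.scope and fact.confidence >= self.threshold:
            return fact.answer   # counterfactual edit overrides prior knowledge
        return base_answer       # out of scope or too noisy: keep original answer
```

For example, an edit scoped to one question overrides the model's answer there, but an unrelated question still receives the original (unedited) answer, avoiding unintended interference.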

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence