Towards Meta-Cognitive Knowledge Editing for Multimodal LLMs
By: Zhaoyu Fan, Kaihang Pan, Mingze Zhou, and more
Potential Business Impact:
Teaches AI to fix its own wrong ideas.
Knowledge editing enables multimodal large language models (MLLMs) to efficiently update outdated or incorrect information. However, existing benchmarks primarily emphasize cognitive-level modifications while lacking a focus on deeper meta-cognitive processes. To bridge this gap, we introduce CogEdit, a novel benchmark designed to evaluate MLLMs' meta-cognitive knowledge editing abilities across three levels: (1) Counterfactual-Driven Editing, assessing self-awareness of knowledge correctness changes; (2) Boundary Constraint Editing, ensuring appropriate generalization without unintended interference; and (3) Noise-Robust Editing, promoting reflective evaluation of uncertain information. To advance meta-cognitive editing, we propose MIND (Meta-cognitive INtegrated Dynamic Knowledge Editing), a framework that constructs a meta-knowledge memory for self-awareness, employs game-theoretic interactions to monitor knowledge activation, and incorporates label refinement for noise-robust updates. Extensive experiments show that MIND significantly outperforms existing cognitive editing approaches, achieving strong performance on both traditional and meta-cognitive knowledge editing benchmarks.
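The abstract names three mechanisms without detailing them: a meta-knowledge memory for self-awareness, game-theoretic monitoring of knowledge activation, and label refinement for noisy updates. The Python sketch below is only an illustrative guess at how such pieces could fit together, not the paper's implementation: every name here (MetaFact, MetaKnowledgeMemory, refine_and_store, shapley_activation) is hypothetical, and a Shapley-value scorer is assumed as a stand-in for whatever game-theoretic interaction MIND actually uses.

```python
# Illustrative sketch only -- not the MIND authors' code. All names are
# hypothetical. It demonstrates three ideas from the abstract:
#   1. a meta-knowledge memory storing each edit together with
#      meta-information (confidence, provenance) for self-awareness;
#   2. a game-theoretic (Shapley-value) score that monitors which memory
#      entries actually drive a given answer;
#   3. label refinement that resolves conflicting, noisy candidate edits
#      by confidence-weighted voting before they enter memory.

from dataclasses import dataclass
from itertools import combinations
from math import factorial
from typing import Callable, Dict, FrozenSet, List


@dataclass
class MetaFact:
    """One edited fact plus meta-knowledge about the edit itself."""
    subject: str
    relation: str
    obj: str
    confidence: float  # belief that this edit is correct, in [0, 1]
    source: str        # provenance tag, e.g. "user_edit" or "noisy_web"


class MetaKnowledgeMemory:
    def __init__(self) -> None:
        self.facts: Dict[str, MetaFact] = {}

    def refine_and_store(self, candidates: List[MetaFact]) -> None:
        """Label refinement: among conflicting candidates for the same
        (subject, relation) key, keep the confidence-weighted winner."""
        grouped: Dict[str, List[MetaFact]] = {}
        for fact in candidates:
            grouped.setdefault(f"{fact.subject}|{fact.relation}", []).append(fact)
        for key, group in grouped.items():
            votes: Dict[str, float] = {}
            for fact in group:
                votes[fact.obj] = votes.get(fact.obj, 0.0) + fact.confidence
            best_obj = max(votes, key=votes.get)
            winner = next(f for f in group if f.obj == best_obj)
            self.facts[key] = MetaFact(
                winner.subject, winner.relation, best_obj,
                confidence=votes[best_obj] / sum(votes.values()),
                source=winner.source,
            )

    def shapley_activation(
        self, keys: List[str], utility: Callable[[FrozenSet[str]], float]
    ) -> Dict[str, float]:
        """Exact Shapley values over a small set of memory entries: each
        entry's marginal contribution to answering the current query.
        Exponential in len(keys); fine only for tiny illustrations."""
        n = len(keys)
        scores = {k: 0.0 for k in keys}
        for k in keys:
            others = [x for x in keys if x != k]
            for r in range(len(others) + 1):
                for coalition in combinations(others, r):
                    s = frozenset(coalition)
                    weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                    scores[k] += weight * (utility(s | {k}) - utility(s))
        return scores


if __name__ == "__main__":
    memory = MetaKnowledgeMemory()
    memory.refine_and_store([
        MetaFact("Eiffel Tower", "located_in", "Paris", 0.9, "user_edit"),
        MetaFact("Eiffel Tower", "located_in", "Rome", 0.2, "noisy_web"),
        MetaFact("Mona Lisa", "painted_by", "Leonardo da Vinci", 0.8, "user_edit"),
    ])
    keys = list(memory.facts)
    # Toy utility: a query is answered iff the Eiffel Tower entry is active.
    util = lambda active: 1.0 if any("Eiffel" in k for k in active) else 0.0
    print(memory.shapley_activation(keys, util))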
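The exact Shapley computation is exponential in the number of memory entries, so any real system would need a sampling-based approximation; it appears here only because it is the most familiar game-theoretic attribution scheme for this kind of activation monitoring.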
Similar Papers
MindBridge: Scalable and Cross-Model Knowledge Editing via Memory-Augmented Modality
Artificial Intelligence
Keeps AI knowledge up-to-date across different programs.
Beyond Memorization: A Rigorous Evaluation Framework for Medical Knowledge Editing
Computation and Language
Helps doctors update medical AI knowledge.
MultiMedEdit: A Scenario-Aware Benchmark for Evaluating Knowledge Editing in Medical VQA
Artificial Intelligence
Helps AI learn new medical facts from pictures.