Score: 3

Model Merging for Knowledge Editing

Published: June 14, 2025 | arXiv ID: 2506.12384v1

By: Zichuan Fu, Xian Wu, Guojing Li, and more

BigTech Affiliations: Tencent

Potential Business Impact:

Lets deployed LLMs be updated with new knowledge over time without full retraining and without degrading their general capabilities.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) require continuous updates to maintain accurate and current knowledge as the world evolves. While existing knowledge editing approaches offer various solutions for knowledge updating, they often struggle with sequential editing scenarios and harm the general capabilities of the model, thereby significantly hampering their practical applicability. This paper proposes a two-stage framework combining robust supervised fine-tuning (R-SFT) with model merging for knowledge editing. Our method first fine-tunes the LLM to internalize new knowledge fully, then merges the fine-tuned model with the original foundation model to preserve newly acquired knowledge and general capabilities. Experimental results demonstrate that our approach significantly outperforms existing methods in sequential editing while better preserving the original performance of the model, all without requiring any architectural changes. Code is available at: https://github.com/Applied-Machine-Learning-Lab/MM4KE.
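
The second stage combines the fine-tuned (edited) model with the original foundation model in parameter space. Below is a minimal sketch of that merging step, assuming simple linear interpolation of the two models' weights; the model names and the interpolation coefficient `alpha` are placeholders, and the paper's actual merging procedure (see the MM4KE repository) may differ.

```python
# Sketch: merge a fine-tuned model back into its base model by linearly
# interpolating parameters. alpha = 0.0 keeps the original model,
# alpha = 1.0 keeps the fully fine-tuned model; intermediate values trade
# off newly injected knowledge against preserved general capabilities.
import torch
from transformers import AutoModelForCausalLM


def merge_state_dicts(base_sd, edited_sd, alpha=0.5):
    """Return a state dict interpolated between base and edited weights."""
    merged = {}
    for name, base_param in base_sd.items():
        edited_param = edited_sd[name]
        merged[name] = (1.0 - alpha) * base_param + alpha * edited_param
    return merged


# Hypothetical checkpoint names, for illustration only.
base = AutoModelForCausalLM.from_pretrained("base-llm")
edited = AutoModelForCausalLM.from_pretrained("base-llm-after-r-sft")

merged_sd = merge_state_dicts(base.state_dict(), edited.state_dict(), alpha=0.5)
base.load_state_dict(merged_sd)
base.save_pretrained("merged-llm")
```

With this formulation, choosing `alpha` controls how strongly the merged model reflects the knowledge learned during R-SFT versus the original model's behavior.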

Country of Origin
🇭🇰 🇨🇳 China, Hong Kong

Repos / Data Links
https://github.com/Applied-Machine-Learning-Lab/MM4KE

Page Count
11 pages

Category
Computer Science:
Artificial Intelligence