Model Merging for Knowledge Editing
By: Zichuan Fu, Xian Wu, Guojing Li, and more
Potential Business Impact:
Keeps AI smart and up-to-date.
Large Language Models (LLMs) require continuous updates to maintain accurate and current knowledge as the world evolves. While existing knowledge editing approaches offer various solutions for knowledge updating, they often struggle with sequential editing scenarios and harm the general capabilities of the model, thereby significantly hampering their practical applicability. This paper proposes a two-stage framework combining robust supervised fine-tuning (R-SFT) with model merging for knowledge editing. Our method first fine-tunes the LLM to internalize new knowledge fully, then merges the fine-tuned model with the original foundation model to preserve newly acquired knowledge and general capabilities. Experimental results demonstrate that our approach significantly outperforms existing methods in sequential editing while better preserving the original performance of the model, all without requiring any architectural changes. Code is available at: https://github.com/Applied-Machine-Learning-Lab/MM4KE.
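The merging stage can be illustrated with a minimal sketch of linear weight interpolation between the base and fine-tuned checkpoints. This is an assumption for illustration only; the paper's actual merging scheme (see the linked repository) may weight or filter parameters differently.

```python
# Hedged sketch: merge a fine-tuned model back into its base model by
# linear interpolation of parameters (illustrative; not the paper's exact method).
def merge_state_dicts(base, finetuned, alpha=0.5):
    """Return merged parameters: base + alpha * (finetuned - base).

    alpha=0.0 keeps the base model unchanged; alpha=1.0 keeps the
    fine-tuned model. Intermediate values trade off newly acquired
    knowledge against the base model's general capabilities.
    """
    return {name: base[name] + alpha * (finetuned[name] - base[name])
            for name in base}

# Toy example with scalar stand-ins for parameter tensors.
base_params = {"w": 1.0, "b": 0.0}
finetuned_params = {"w": 3.0, "b": 2.0}
merged = merge_state_dicts(base_params, finetuned_params, alpha=0.5)
print(merged)  # {'w': 2.0, 'b': 1.0}
```

In practice the dictionaries would be model state dicts (parameter name to tensor), and the same element-wise formula applies per tensor.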
Similar Papers
Training-free LLM Merging for Multi-task Learning
Computation and Language
Combines smart computer brains for more tasks.
Unlocking Efficient Long-to-Short LLM Reasoning with Model Merging
Computation and Language
Makes smart computers think faster, not too much.
One for All: Update Parameterized Knowledge Across Multiple Models
Computation and Language
Updates many AI models at once with new facts.