Massive Editing for Large Language Models Based on Dynamic Weight Generation
By: Wentao Wan, Qiqing Lao, Zhiwei Xie, and more
Potential Business Impact:
Changes AI's knowledge without retraining it.
Knowledge Editing (KE) studies how to modify specific knowledge in Large Language Models (LLMs) at low cost compared to pre-training. Performing large-scale edits on LLMs while preserving the Reliability, Generality, and Locality of those edits remains a challenge. This paper proposes a Massive editing approach for LLMs based on dynamic weight Generation (MeG). MeG attaches a dynamic weight neuron to specific layers of the LLM and uses a diffusion model to conditionally generate the weights of this neuron from the input query associated with the knowledge to be edited. As a result, adding a single dynamic weight neuron suffices for large-scale knowledge editing. Experiments show that MeG significantly improves large-scale KE performance on the Reliability, Generality, and Locality metrics compared to existing knowledge editing methods, with a particularly large absolute percentage-point gain on Locality, demonstrating the advantages of the proposed method.
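The core mechanism described in the abstract (a per-query weight vector sampled from a conditional diffusion model and applied as a single extra neuron) can be sketched as follows. This is a minimal illustration assuming a PyTorch-style setup; the class names (DynamicNeuron, WeightDenoiser), the rank-1 residual update, and the heavily simplified denoising loop are assumptions made for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of the MeG idea, assuming PyTorch.
import torch
import torch.nn as nn

class WeightDenoiser(nn.Module):
    """Toy conditional denoiser: predicts a correction to the neuron's
    noisy weight vector given the query embedding as the condition."""
    def __init__(self, weight_dim, query_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(weight_dim + query_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, weight_dim),
        )

    def forward(self, noisy_w, query_emb, t):
        # t is a (1, 1) timestep tensor broadcast to the batch.
        t_feat = t.expand(noisy_w.size(0), 1)
        return self.net(torch.cat([noisy_w, query_emb, t_feat], dim=-1))

class DynamicNeuron(nn.Module):
    """A single neuron attached to a chosen LLM layer whose weights
    are generated per query rather than stored as fixed parameters."""
    def __init__(self, hidden_dim, query_dim, n_steps=10):
        super().__init__()
        self.weight_dim = hidden_dim
        self.denoiser = WeightDenoiser(hidden_dim, query_dim)
        self.n_steps = n_steps

    @torch.no_grad()
    def generate_weights(self, query_emb):
        # Simplified diffusion-style reverse process: start from noise
        # and iteratively denoise, conditioned on the query embedding.
        w = torch.randn(query_emb.size(0), self.weight_dim)
        for step in reversed(range(self.n_steps)):
            t = torch.tensor([[step / self.n_steps]])
            w = w - self.denoiser(w, query_emb, t)
        return w

    def forward(self, hidden_states, query_emb):
        # hidden_states: (batch, seq, hidden_dim). Apply the generated
        # neuron as a rank-1 residual update to the layer output.
        w = self.generate_weights(query_emb)                      # (B, H)
        act = torch.relu((hidden_states * w.unsqueeze(1)).sum(-1, keepdim=True))
        return hidden_states + act * w.unsqueeze(1)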
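In this sketch the query embedding would come from the LLM's own representation of the prompt, and the rank-1 form keeps the added parameter count at a single neuron per edited layer, which matches the abstract's claim that one dynamic weight neuron is enough for large-scale editing.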
Similar Papers
MultiMedEdit: A Scenario-Aware Benchmark for Evaluating Knowledge Editing in Medical VQA
Artificial Intelligence
Helps AI learn new medical facts from pictures.
Knowledge Editing for Multi-Hop Question Answering Using Semantic Analysis
Artificial Intelligence
Makes AI answer harder questions by fixing its thinking.
MobiEdit: Resource-efficient Knowledge Editing for Personalized On-device LLMs
Machine Learning (CS)
Lets phones update AI without needing the internet.