A Dual-Axis Taxonomy of Knowledge Editing for LLMs: From Mechanisms to Functions
By: Amir Mohammad Salehoof, Ali Ramezani, Yadollah Yaghoobzadeh, and more
Potential Business Impact:
Updates AI language models with new facts quickly, without retraining.
Large language models (LLMs) acquire vast knowledge from large text corpora, but this information can become outdated or inaccurate. Since retraining is computationally expensive, knowledge editing offers an efficient alternative -- modifying internal knowledge without full retraining. These methods aim to update facts precisely while preserving the model's overall capabilities. While existing surveys focus on the mechanism of editing (e.g., parameter changes vs. external memory), they often overlook the function of the knowledge being edited. This survey introduces a novel, complementary function-based taxonomy to provide a more holistic view. We examine how different mechanisms apply to various knowledge types -- factual, temporal, conceptual, commonsense, and social -- highlighting how editing effectiveness depends on the nature of the target knowledge. By organizing our review along these two axes, we map the current landscape, outline the strengths and limitations of existing methods, define the problem formally, survey evaluation tasks and datasets, and conclude with open challenges and future directions.
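To make the mechanism axis of the abstract concrete, the sketch below contrasts the two families in toy form: a direct parameter edit (a rank-one weight update, loosely in the spirit of locate-and-edit methods such as ROME) and a memory-based edit that stores the new fact outside the frozen model. This is a minimal illustration under our own assumptions; names like rank_one_edit and MemoryEditedModel are hypothetical and do not come from the paper.

    # Illustrative sketch of the survey's mechanism axis:
    # (1) editing parameters directly vs. (2) routing through external memory.
    import numpy as np

    # --- Mechanism 1: parameter editing (rank-one weight update) ---
    # A linear "fact recall" layer W maps a subject key k to a value v.
    # To rewrite the fact k -> v_new, add a rank-one correction so that
    # the edited layer returns v_new for k while directions orthogonal
    # to k are left unchanged.
    def rank_one_edit(W, k, v_new):
        k = k / np.linalg.norm(k)            # unit-normalize the key
        residual = v_new - W @ k             # what the layer currently gets wrong
        return W + np.outer(residual, k)     # W' = W + (v_new - W k) k^T

    # --- Mechanism 2: memory-based editing (weights stay frozen) ---
    # Edits live in an external store; at inference time a matching key
    # overrides the frozen base model's answer.
    class MemoryEditedModel:
        def __init__(self, base_predict):
            self.base_predict = base_predict  # frozen base model
            self.edits = {}                   # external edit memory

        def edit(self, key, new_value):
            self.edits[key] = new_value       # no weights are touched

        def predict(self, key):
            return self.edits.get(key, self.base_predict(key))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 4))
        k = rng.normal(size=4)
        k /= np.linalg.norm(k)
        v_new = rng.normal(size=4)
        W_edited = rank_one_edit(W, k, v_new)
        assert np.allclose(W_edited @ k, v_new)   # edited fact is now recalled

        model = MemoryEditedModel(lambda q: "base answer")
        model.edit("example_fact", "updated answer")
        assert model.predict("example_fact") == "updated answer"
        assert model.predict("unrelated_fact") == "base answer"

The contrast captures the trade-off the survey maps: parameter edits change what the model itself computes, while memory-based edits are easy to add or revoke but depend on reliably deciding when a query falls within an edit's scope.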
Similar Papers
Can We Edit LLMs for Long-Tail Biomedical Knowledge?
Computation and Language
Helps computers learn rare medical facts better.
Knowledge Updating? No More Model Editing! Just Selective Contextual Reasoning
Computation and Language
Lets computers learn new things without forgetting old ones.
Towards Meta-Cognitive Knowledge Editing for Multimodal LLMs
Artificial Intelligence
Teaches AI to fix its own wrong ideas.