
A Dual-Axis Taxonomy of Knowledge Editing for LLMs: From Mechanisms to Functions

Published: August 12, 2025 | arXiv ID: 2508.08795v1

By: Amir Mohammad Salehoof, Ali Ramezani, Yadollah Yaghoobzadeh, et al.

Potential Business Impact:

Enables fast, targeted updates to an LLM's stored facts without costly retraining.

Large language models (LLMs) acquire vast knowledge from large text corpora, but this information can become outdated or inaccurate. Since retraining is computationally expensive, knowledge editing offers an efficient alternative -- modifying internal knowledge without full retraining. These methods aim to update facts precisely while preserving the model's overall capabilities. While existing surveys focus on the mechanism of editing (e.g., parameter changes vs. external memory), they often overlook the function of the knowledge being edited. This survey introduces a novel, complementary function-based taxonomy to provide a more holistic view. We examine how different mechanisms apply to various knowledge types -- factual, temporal, conceptual, commonsense, and social -- highlighting how editing effectiveness depends on the nature of the target knowledge. By organizing our review along these two axes, we map the current landscape, outline the strengths and limitations of existing methods, define the problem formally, survey evaluation tasks and datasets, and conclude with open challenges and future directions.
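The mechanism-side idea of editing parameters directly can be sketched with a toy example. The snippet below is not any specific method from the survey; it is a minimal NumPy illustration in the spirit of locate-then-edit approaches, where a linear "memory" layer maps key vectors (subject representations) to value vectors (fact representations), and a single fact is rewritten via a rank-one weight update that leaves orthogonal keys untouched.

```python
import numpy as np

def rank_one_edit(W: np.ndarray, k: np.ndarray, v_new: np.ndarray) -> np.ndarray:
    """Return W' such that W' @ k == v_new, via a minimal rank-one change.

    Keys orthogonal to k are mapped exactly as before, which is the
    toy analogue of "editing one fact while preserving the rest".
    """
    residual = v_new - W @ k                   # what the old mapping gets wrong
    update = np.outer(residual, k) / (k @ k)   # rank-one correction along k
    return W + update

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))        # toy "knowledge" layer
k = rng.normal(size=4)             # key for the fact being edited
v_new = rng.normal(size=4)         # desired new value for that fact

W_edited = rank_one_edit(W, k, v_new)
print(np.allclose(W_edited @ k, v_new))  # the edited fact now holds
```

Real parameter-editing methods work on transformer MLP layers and choose the key and value from model activations, but the core algebra (a low-rank correction that installs a new key-to-value association) is the same shape as this sketch.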

Country of Origin
🇮🇷 Iran

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence