Score: 3

Benchmarking and Rethinking Knowledge Editing for Large Language Models

Published: May 24, 2025 | arXiv ID: 2505.18690v1

By: Guoxiu He, Xin Song, Futing Wang, and more

Potential Business Impact:

Helps AI models reliably incorporate and recall updated facts.

Business Areas:
Semantic Search, Internet Services

Knowledge editing aims to update the knowledge embedded within Large Language Models (LLMs). However, existing approaches, whether based on parameter modification or external memory integration, have been evaluated under inconsistent objectives and experimental setups. To address this gap, we conduct a comprehensive benchmarking study. In addition to fact-level datasets, we introduce more complex event-based datasets as well as general-purpose datasets drawn from other tasks. Our evaluation covers both instruction-tuned and reasoning-oriented LLMs, under a realistic autoregressive inference setting rather than teacher-forced decoding. Beyond single-edit assessments, we also evaluate multi-edit scenarios to better reflect practical demands. We employ four evaluation dimensions, including portability, and compare all recent methods against a straightforward baseline named Selective Contextual Reasoning (SCR). Empirical results reveal that parameter-based editing methods perform poorly under realistic conditions. In contrast, SCR consistently outperforms them across all settings. This study offers new insights into the limitations of current knowledge editing methods and highlights the potential of context-based reasoning as a more robust alternative.
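To make the contrast with parameter-based editing concrete, the sketch below illustrates the general idea behind a selective contextual reasoning baseline as the abstract describes it: edited facts live in an external memory, and an edit is only injected into the prompt when it is judged relevant to the query, leaving the model's parameters untouched. All names here (ScrMemory, the lexical-overlap relevance score, the threshold) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of selective contextual reasoning (SCR), assuming an
# external memory of edited facts and a simple relevance filter.
# A real system would likely use an embedding-based retriever; the
# lexical-overlap score below is only a stand-in.

from dataclasses import dataclass, field


@dataclass
class ScrMemory:
    """External store of edited facts; the base model's weights stay frozen."""
    facts: list[str] = field(default_factory=list)
    threshold: float = 0.2  # hypothetical relevance cutoff

    def add_edit(self, fact: str) -> None:
        self.facts.append(fact)

    def _relevance(self, query: str, fact: str) -> float:
        # Toy lexical-overlap score in [0, 1].
        q, f = set(query.lower().split()), set(fact.lower().split())
        return len(q & f) / max(len(q | f), 1)

    def select(self, query: str) -> list[str]:
        """Selection step: keep only facts scored relevant to this query."""
        return [f for f in self.facts if self._relevance(query, f) >= self.threshold]


def build_prompt(memory: ScrMemory, query: str) -> str:
    """Contextual reasoning step: prepend selected edits to the prompt;
    unrelated queries pass through unchanged, so edits do not interfere."""
    selected = memory.select(query)
    if not selected:
        return query
    context = "\n".join(f"- {fact}" for fact in selected)
    return f"Use the following updated facts if relevant:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    mem = ScrMemory()
    mem.add_edit("The CEO of Acme Corp is Jane Doe as of 2025.")
    print(build_prompt(mem, "Who is the CEO of Acme Corp?"))    # edit injected
    print(build_prompt(mem, "What is the capital of France?"))  # no edit injected
```

The design choice this sketch highlights is the one the abstract credits for SCR's robustness: because edits are applied at inference time only when selected, multi-edit settings do not degrade the base model the way repeated parameter updates can.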

Country of Origin
🇸🇬 🇨🇳 China, Singapore


Page Count
22 pages

Category
Computer Science:
Computation and Language