Benchmarking and Rethinking Knowledge Editing for Large Language Models
By: Guoxiu He, Xin Song, Futing Wang, and more
Potential Business Impact:
Makes AI remember new facts better.
Knowledge editing aims to update the embedded knowledge within Large Language Models (LLMs). However, existing approaches, whether through parameter modification or external memory integration, often suffer from inconsistent evaluation objectives and experimental setups. To address this gap, we conduct a comprehensive benchmarking study. In addition to fact-level datasets, we introduce more complex event-based datasets and general-purpose datasets drawn from other tasks. Our evaluation covers both instruction-tuned and reasoning-oriented LLMs, under a realistic autoregressive inference setting rather than teacher-forced decoding. Beyond single-edit assessments, we also evaluate multi-edit scenarios to better reflect practical demands. We employ four evaluation dimensions, including portability, and compare all recent methods against a simple baseline named Selective Contextual Reasoning (SCR). Empirical results reveal that parameter-based editing methods perform poorly under realistic conditions. In contrast, SCR consistently outperforms them across all settings. This study offers new insights into the limitations of current knowledge editing methods and highlights the potential of context-based reasoning as a more robust alternative.
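To make the contrast between parameter editing and context-based reasoning concrete, here is a minimal Python sketch of the selective-contextual-reasoning idea: edited facts are kept in an external memory, only the facts relevant to a query are selected, and they are placed in the prompt so an unmodified LLM can reason over them in context. The class names, the token-overlap retrieval, and the prompt template are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of selective contextual reasoning (SCR) for knowledge updating.
# Edited facts live in an external memory rather than in the model's weights;
# at inference time only the facts relevant to the query are selected and
# prepended to the prompt, and the unmodified LLM reasons over them in context.
# The retrieval scoring and prompt template are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class EditMemory:
    """External store of edited facts, kept outside the model's parameters."""
    facts: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        self.facts.append(fact)

    def select(self, query: str, k: int = 2) -> list[str]:
        """Select the k facts with the highest token overlap with the query.

        A real system would likely use a dense retriever or a relevance
        classifier; token overlap keeps this sketch dependency-free.
        """
        q_tokens = set(query.lower().split())
        scored = sorted(
            self.facts,
            key=lambda f: len(q_tokens & set(f.lower().split())),
            reverse=True,
        )
        return scored[:k]


def build_prompt(memory: EditMemory, question: str) -> str:
    """Prepend only the selected facts so the LLM can use the updated knowledge."""
    selected = memory.select(question)
    context = "\n".join(f"- {fact}" for fact in selected)
    return (
        "Use the following updated facts if they are relevant.\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    memory = EditMemory()
    memory.add("The CEO of Acme Corp is Jane Doe as of 2024.")      # hypothetical edit
    memory.add("The Eiffel Tower was repainted gold in 2023.")      # hypothetical edit

    prompt = build_prompt(memory, "Who is the CEO of Acme Corp?")
    print(prompt)  # pass this prompt to any unmodified, autoregressive LLM
```

Because the base model's parameters are never touched, this style of updating sidesteps the multi-edit degradation that the benchmark observes for parameter-based methods; its cost is the extra retrieval step and a longer prompt at inference time.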
Similar Papers
Knowledge Updating? No More Model Editing! Just Selective Contextual Reasoning
Computation and Language
Lets computers learn new things without forgetting old ones.
UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models
Computation and Language
Makes AI smarter and more truthful everywhere.
Beyond Memorization: A Rigorous Evaluation Framework for Medical Knowledge Editing
Computation and Language
Helps doctors update medical AI knowledge.