Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model Editing
By: Bhiman Kumar Baghel, Scott M. Jordan, Zheyuan Ryan Shi, and more
Potential Business Impact:
Updates AI knowledge without breaking other facts.
Large Language Models (LLMs) are widely deployed in downstream tasks, but keeping their knowledge up-to-date via retraining or fine-tuning is often computationally expensive. Model editing provides a more efficient alternative by updating a targeted subset of parameters, typically following the locate-and-edit paradigm. Despite this efficiency, existing methods are limited: an edit may fail to inject the new knowledge (UnderEdit) or unintentionally disrupt unrelated neighboring knowledge (OverEdit). To address these challenges, we propose two complementary methods: iterative model editing, which applies successive edits to mitigate UnderEdit, and neighbor-assisted model editing, which incorporates neighboring knowledge during editing to reduce OverEdit. Our extensive experiments show that these techniques improve editing performance across multiple LLMs, algorithms, and benchmarks, reducing UnderEdit by up to 38 percentage points and OverEdit by up to 6 percentage points, while remaining broadly applicable to any locate-and-edit method.
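To make the two ideas concrete, here is a minimal Python sketch of how they could wrap any locate-and-edit backend (e.g., a ROME- or MEMIT-style editor). The function names `apply_edit` and `edit_succeeded`, and the `(prompt, target)` fact shape, are illustrative assumptions, not the authors' API; the backend is passed in as a callable so nothing is assumed about its internals.

```python
# Hedged sketch of iterative and neighbor-assisted model editing.
# Assumptions (not from the paper's code): a fact is a (prompt, target) pair;
# `apply_edit(model, facts)` runs one locate-and-edit pass and returns the
# edited model; `edit_succeeded(model, fact)` checks whether the model now
# produces the target for the prompt.

from typing import Callable, List, Tuple

Fact = Tuple[str, str]  # e.g., ("The capital of France is", "Paris")


def iterative_edit(
    model,
    fact: Fact,
    apply_edit: Callable,      # one locate-and-edit pass (e.g., ROME/MEMIT)
    edit_succeeded: Callable,  # probe: does the model now emit the target?
    max_rounds: int = 5,
):
    """Mitigate UnderEdit: re-apply the same edit until it takes hold."""
    for _ in range(max_rounds):
        model = apply_edit(model, [fact])
        if edit_succeeded(model, fact):
            break  # knowledge injected; stop to avoid unnecessary updates
    return model


def neighbor_assisted_edit(
    model,
    fact: Fact,
    neighbors: List[Fact],  # related facts whose targets must NOT change
    apply_edit: Callable,
):
    """Mitigate OverEdit: include neighboring facts, with their original
    (unchanged) targets, in the edit batch so the parameter update is
    constrained to preserve them."""
    batch = [fact] + neighbors
    return apply_edit(model, batch)
```

The two wrappers compose naturally: one could call `neighbor_assisted_edit` inside the retry loop of `iterative_edit` to target both failure modes at once, which mirrors how the paper pairs the techniques with existing locate-and-edit algorithms.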
Similar Papers
UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models
Computation and Language
Makes AI smarter and more truthful everywhere.
One for All: Update Parameterized Knowledge Across Multiple Models
Computation and Language
Updates many AI models at once with new facts.
HyperEdit: Unlocking Instruction-based Text Editing in LLMs via Hypernetworks
Computation and Language
Fixes computer code with fewer mistakes.