On the Limitations of Rank-One Model Editing in Answering Multi-hop Questions
By: Zhiyuan He, Binghan Chen, Tianxiang Xiong, and more
Potential Business Impact:
Fixes AI's memory for complex, multi-step questions.
Recent advances in Knowledge Editing (KE), particularly Rank-One Model Editing (ROME), show greater efficiency than fine-tuning and in-context learning for updating single-hop facts in transformers. However, these methods face significant challenges when applied to multi-hop reasoning tasks that require knowledge chaining. In this work, we study the effect of editing knowledge with ROME at different layer depths and identify three key failure modes. First, the "hopping-too-late" problem occurs because later layers lack access to the necessary intermediate representations. Second, generalization ability deteriorates sharply when editing later layers. Third, the model overfits to edited knowledge, incorrectly prioritizing edited-hop answers regardless of context. To mitigate the "hopping-too-late" problem and generalization decay, we propose Redundant Editing, a simple yet effective strategy that enhances multi-hop reasoning. Our experiments demonstrate that this approach improves accuracy on 2-hop questions by at least 15.5 percentage points, a 96% relative increase over the previous single-edit strategy, while trading off some specificity and language naturalness.
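To make the idea concrete, below is a minimal sketch of what "Redundant Editing" could look like in code: the standard ROME closed-form rank-one update applied at several MLP layers instead of one, so that earlier layers already expose the edited intermediate entity for later hops. The layer indices, tensor shapes, and helper names (`rome_rank_one_update`, `redundant_edit`) are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of Redundant Editing: apply the same ROME-style rank-one update at
# several MLP down-projection layers rather than a single layer.
# All shapes, layer choices, and function names are illustrative.
import torch


def rome_rank_one_update(W: torch.Tensor, k_star: torch.Tensor,
                         v_star: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
    """Return W' such that W' @ k_star == v_star (ROME closed-form edit).

    W      : (d_out, d_in) MLP projection weight
    k_star : (d_in,)  key vector encoding the edited subject
    v_star : (d_out,) value vector encoding the new fact
    C      : (d_in, d_in) uncentered covariance of keys, E[k k^T]
    """
    c_inv_k = torch.linalg.solve(C, k_star)            # C^{-1} k*
    residual = v_star - W @ k_star                     # what the edit must add
    scale = c_inv_k @ k_star                           # (C^{-1} k*)^T k*
    return W + torch.outer(residual / scale, c_inv_k)  # rank-one correction


def redundant_edit(weights: dict[int, torch.Tensor], layers: list[int],
                   k_star: torch.Tensor, v_star: torch.Tensor,
                   C: torch.Tensor) -> dict[int, torch.Tensor]:
    """Insert the same fact at every layer in `layers` (e.g. an early and a
    middle layer), so later hops can still read the intermediate entity."""
    return {l: (rome_rank_one_update(W, k_star, v_star, C) if l in layers else W)
            for l, W in weights.items()}


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real model activations.
    d_in, d_out = 16, 8
    weights = {l: torch.randn(d_out, d_in) for l in range(12)}
    k_star, v_star = torch.randn(d_in), torch.randn(d_out)
    C = torch.eye(d_in)  # identity stands in for the key covariance
    edited = redundant_edit(weights, layers=[4, 8],
                            k_star=k_star, v_star=v_star, C=C)
    print(torch.allclose(edited[4] @ k_star, v_star, atol=1e-5))  # True
```

The trade-off the abstract reports (lower specificity and language naturalness) is consistent with this picture: writing the same fact into multiple layers makes it easier to retrieve mid-chain, but perturbs more of the network than a single edit would.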
Similar Papers
Avoiding Knowledge Edit Skipping in Multi-hop Question Answering with Guided Decomposition
Computation and Language
Helps AI remember new facts without retraining.
Tracing and Reversing Rank-One Model Edits
Computation and Language
Finds and undoes bad changes in AI.
Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA
Computation and Language
Teaches AI new facts without retraining.