Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA
By: Yuchen Wu, Liang Ding, Li Shen, and more
Potential Business Impact:
Teaches AI new facts without retraining.
Large language models (LLMs) encode vast amounts of world knowledge but remain static once trained, making the timely integration of emerging facts prohibitively expensive via full retraining. Knowledge-editing techniques have thus emerged to inject or overwrite specific facts in LLMs, yet they either over-rely on superficial cues or require complex, iterative pipelines that collapse under noisy, multi-hop conditions. We introduce Reason-KE, an end-to-end reasoning-chain-based editing framework that steers a pretrained LLM through four structured stages (fact acknowledgment, relevance determination, selective application, and final reasoning) to filter distractors in a single pass. Trained on MQuAKE-CF with up to four irrelevant facts, Reason-KE raises Qwen2.5-7B's multi-hop QA accuracy to 90.2%, with only a 6.3% drop under heavy distraction and <1% when answers are leaked. Our quantitative analysis confirms Reason-KE's resilience and efficiency, establishing a new state of the art for reliable LLM knowledge updates.
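The four stages named in the abstract can be illustrated as a single-pass prompt template. This is a minimal sketch, not the authors' implementation: the stage names come from the abstract, while the function name, prompt wording, and example facts are illustrative assumptions.

```python
# Hypothetical sketch of Reason-KE's four-stage reasoning chain as one
# single-pass prompt. Stage names are taken from the abstract; everything
# else (build_prompt, wording, example facts) is an illustrative assumption.

STAGES = [
    "Fact acknowledgment: restate the edited facts given in context.",
    "Relevance determination: decide which facts bear on the question.",
    "Selective application: apply only the relevant facts; ignore distractors.",
    "Final reasoning: chain the applied facts into the multi-hop answer.",
]

def build_prompt(question: str, edited_facts: list[str]) -> str:
    """Assemble one prompt that walks the model through all four stages."""
    fact_block = "\n".join(f"- {fact}" for fact in edited_facts)
    stage_block = "\n".join(f"{i}. {stage}" for i, stage in enumerate(STAGES, 1))
    return (
        f"Edited facts (may include irrelevant distractors):\n{fact_block}\n\n"
        f"Answer the question by working through these stages:\n{stage_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "Who is the head of state of the country where the Eiffel Tower is located?",
    [
        "The Eiffel Tower is located in Rome.",  # edited (counterfactual) fact
        "The capital of Japan is Tokyo.",        # distractor to be filtered out
    ],
)
print(prompt)
```

Because all four stages live in one prompt, the model filters distractors in a single forward pass rather than an iterative retrieve-then-edit loop, which is the efficiency property the abstract highlights.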
Similar Papers
ACE: Attribution-Controlled Knowledge Editing for Multi-hop Factual Recall
Computation and Language
Fixes AI's memory for complex, multi-step facts.
Knowledge Editing for Multi-Hop Question Answering Using Semantic Analysis
Artificial Intelligence
Makes AI answer harder questions by fixing its thinking.
Reason-KE++: Aligning the Process, Not Just the Outcome, for Faithful LLM Knowledge Editing
Computation and Language
Makes AI think correctly, not just copy.