Knowledge Editing for Multi-Hop Question Answering Using Semantic Analysis
By: Dominic Simon, Rickard Ewetz
Potential Business Impact:
Helps AI answer harder, multi-step questions by checking and fixing its reasoning.
Large Language Models (LLMs) require lightweight methods for updating stored information that has fallen out of date. Knowledge Editing (KE) approaches have been successful at updating model knowledge for simple factual queries, but they struggle with tasks that require compositional reasoning, such as multi-hop question answering (MQA). We observe that existing knowledge editors rely on decompositional techniques that can produce illogical reasoning processes. In this paper, we propose CHECK, a knowledge editor for MQA based on semantic analysis. Our framework is built on insights from an analogy between compilers and reasoning with LLMs: just as source code is compiled before it is executed, we semantically analyze reasoning chains before executing them to answer questions. Reasoning chains with semantic errors are revised to ensure consistency through logic optimization and by re-prompting the LLM at a higher temperature. We evaluate the effectiveness of CHECK against five state-of-the-art frameworks on four datasets and achieve an average 22.8% improvement in MQA accuracy.
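To make the analyze-then-execute idea concrete, here is a minimal Python sketch of such a pipeline, assuming a generic llm(prompt, temperature) callable. The helper names (generate_chain, semantic_check, execute_chain), the "#i" placeholder convention for linking hops, and the temperature schedule are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of an analyze-then-execute MQA loop in the spirit of CHECK.
# All names and conventions here are assumptions for illustration only.

def generate_chain(llm, question, temperature=0.0):
    """Ask the LLM to decompose a question into reasoning steps, one per line.
    Later hops reference earlier answers via placeholders like '#1', '#2'."""
    prompt = f"Decompose into reasoning steps, one per line:\n{question}"
    return llm(prompt, temperature=temperature).splitlines()

def semantic_check(chain):
    """Toy 'compile-time' pass: hop i+1 must consume hop i's answer slot,
    analogous to a compiler rejecting an ill-typed program.
    Returns the index of the first inconsistent hop, or None if the chain passes."""
    for i in range(1, len(chain)):
        if f"#{i}" not in chain[i]:  # hop i+1 never uses the previous answer
            return i
    return None

def execute_chain(llm, chain):
    """Resolve hops in order, substituting earlier answers into later steps."""
    answers = []
    for step in chain:
        for j, ans in enumerate(answers, start=1):
            step = step.replace(f"#{j}", ans)
        answers.append(llm(step, temperature=0.0))
    return answers[-1] if answers else None

def check_mqa(llm, question, max_retries=3):
    """Only semantically valid chains are executed; invalid ones are
    regenerated at a higher sampling temperature, as the abstract describes."""
    temperature = 0.0
    for _ in range(max_retries):
        chain = generate_chain(llm, question, temperature)
        if semantic_check(chain) is None:
            return execute_chain(llm, chain)
        temperature += 0.4  # re-prompt with more diversity on failure
    return None  # no semantically valid chain was found
```

The sketch mirrors the compiler analogy from the abstract: semantic_check acts as a compile-time pass that rejects ill-formed reasoning chains before any hop is executed.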
Similar Papers
Robust Knowledge Editing via Explicit Reasoning Chains for Distractor-Resilient Multi-Hop QA
Computation and Language
Teaches AI new facts without retraining.
ALEX: A Light Editing-knowledge Extractor
Artificial Intelligence
Helps AI learn new facts without forgetting old ones.
Avoiding Knowledge Edit Skipping in Multi-hop Question Answering with Guided Decomposition
Computation and Language
Helps AI remember new facts without retraining.