KCR: Resolving Long-Context Knowledge Conflicts via Reasoning in LLMs
By: Xianda Zheng, Zijian Huang, Meng-Fen Chiang, and more
Potential Business Impact:
Helps AI pick the true facts when sources conflict.
Knowledge conflicts commonly arise across diverse sources, and their prevalence has increased with the advent of LLMs. When faced with conflicts between multiple contexts, known as inter-context knowledge conflicts, LLMs are often confused by lengthy and contradictory contexts. To address this challenge, we propose the Knowledge Conflict Reasoning (KCR) framework, which enhances the ability of LLMs to resolve conflicting knowledge. The key idea of KCR is to train backbone LLMs to establish a correct reasoning process by rewarding them for selecting and adhering to the context with stronger logical consistency when presented with conflicting contexts. Specifically, we first extract reasoning paths, represented either as text or as local knowledge graphs, from the conflicting long contexts. We then employ reinforcement learning to encourage the model to learn a reasoning paradigm that follows correct reasoning paths rather than incorrect ones. This enables backbone models to genuinely acquire the capability to resolve inter-context knowledge conflicts within long contexts. Experimental results demonstrate that our framework significantly improves the ability of various backbone models to resolve knowledge conflicts in long-context scenarios, yielding substantial performance gains.
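The abstract does not spell out how the reward is computed, so the sketch below is only one plausible reading: a hypothetical reward that combines answer correctness with how closely the model's generated reasoning follows the extracted "correct" path versus the conflicting one, suitable for plugging into a standard RL fine-tuning loop (e.g. PPO or GRPO). All function names, the fuzzy-match threshold, and the path format are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a KCR-style reward signal; not the paper's actual code.
# Assumes reasoning paths have already been extracted from the two conflicting
# contexts as lists of short textual steps.
from difflib import SequenceMatcher

def path_overlap(generation: str, path_steps: list[str]) -> float:
    """Fraction of path steps that fuzzily appear in the generated reasoning."""
    if not generation:
        return 0.0
    sentences = generation.split(".")
    hits = 0
    for step in path_steps:
        best = max(
            SequenceMatcher(None, step.lower(), s.lower()).ratio() for s in sentences
        )
        hits += best > 0.6  # assumed fuzzy-match threshold
    return hits / max(len(path_steps), 1)

def kcr_reward(generation: str,
               correct_path: list[str],
               incorrect_path: list[str],
               final_answer: str,
               gold_answer: str) -> float:
    """Reward = answer correctness + adherence to the correct reasoning path
    - adherence to the conflicting (incorrect) path."""
    answer_score = 1.0 if gold_answer.lower() in final_answer.lower() else 0.0
    return (answer_score
            + path_overlap(generation, correct_path)
            - path_overlap(generation, incorrect_path))
```

Under this reading, the penalty term is what discourages the model from anchoring on the logically weaker context, while the answer term keeps the reward grounded in task accuracy.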
Similar Papers
Evaluating Long-Context Reasoning in LLM-Based WebAgents
Machine Learning (CS)
Helps AI remember long conversations to do tasks.
LoongRL: Reinforcement Learning for Advanced Reasoning over Long Contexts
Computation and Language
Helps computers understand long stories to answer questions.
Knowledge Reasoning Language Model: Unifying Knowledge and Language for Inductive Knowledge Graph Reasoning
Computation and Language
Makes computers understand facts better from mixed knowledge.