KCR: Resolving Long-Context Knowledge Conflicts via Reasoning in LLMs

Published: August 2, 2025 | arXiv ID: 2508.01273v2

By: Xianda Zheng, Zijian Huang, Meng-Fen Chiang, and more

Potential Business Impact:

Helps LLMs identify which facts to trust when their source documents conflict

Knowledge conflicts commonly arise across diverse sources, and their prevalence has increased with the advent of LLMs. When dealing with conflicts between multiple contexts, known as inter-context knowledge conflicts, LLMs are often confused by lengthy and contradictory contexts. To address this challenge, we propose the Knowledge Conflict Reasoning (KCR) framework, which enhances the ability of LLMs to resolve conflicting knowledge. The key idea of KCR is to train backbone LLMs to establish a correct reasoning process by rewarding them for selecting and adhering to the context with stronger logical consistency when presented with conflicting contexts. Specifically, we first extract reasoning paths, represented as either text or local knowledge graphs, from the conflicting long contexts. We then use reinforcement learning to encourage the model to follow correct reasoning paths rather than incorrect ones. This enables backbone models to genuinely acquire the capability to resolve inter-context knowledge conflicts within long contexts. Experimental results demonstrate that our framework significantly improves the ability of various backbone models to resolve knowledge conflicts in long-context scenarios, yielding substantial performance gains.
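
To make the reward idea concrete, the sketch below illustrates one plausible reading of the abstract: score a model's generated reasoning steps by how well they adhere to the extracted "correct" reasoning path versus the conflicting one. The ReasoningPath structure, the step-overlap scoring, and all names here are illustrative assumptions; the paper's actual reward design and RL setup are not specified in this summary.

```python
# Minimal sketch of a KCR-style reward signal, assuming a simple
# step-overlap scheme. Everything below is a hypothetical illustration,
# not the paper's implementation.

from dataclasses import dataclass


@dataclass
class ReasoningPath:
    steps: tuple[str, ...]   # e.g. triples or sentences extracted from a context
    source_context: str      # which of the conflicting contexts the path came from


def path_reward(generated_steps: list[str],
                correct_path: ReasoningPath,
                incorrect_path: ReasoningPath) -> float:
    """Reward adherence to the logically consistent path.

    Assumed scheme: +1 for each generated step found in the correct path,
    -1 for each step found only in the conflicting path, 0 otherwise,
    normalized by the number of generated steps. The resulting scalar
    would feed a standard RL fine-tuning loop (e.g. policy gradient).
    """
    if not generated_steps:
        return 0.0
    correct = set(correct_path.steps)
    incorrect = set(incorrect_path.steps) - correct
    score = sum(
        1.0 if s in correct else -1.0 if s in incorrect else 0.0
        for s in generated_steps
    )
    return score / len(generated_steps)


# Toy usage: two conflicting contexts disagree about a capital city.
good = ReasoningPath(("X was founded in 1200", "X is the capital"), "context_A")
bad = ReasoningPath(("Y is the capital",), "context_B")
model_output = ["X was founded in 1200", "X is the capital"]
print(path_reward(model_output, good, bad))  # -> 1.0
```

Under this assumed scheme, a model that follows the incorrect path receives a negative reward, so RL updates push it toward the context with stronger logical consistency, which matches the training objective the abstract describes.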

Country of Origin
🇳🇿 🇹🇼 🇨🇳 New Zealand, Taiwan, China

Page Count
16 pages

Category
Computer Science:
Artificial Intelligence