Seeing through the Conflict: Transparent Knowledge Conflict Handling in Retrieval-Augmented Generation
By: Hua Ye, Siyuan Chen, Ziqi Zhong, and more
Potential Business Impact:
Fixes AI mistakes by checking facts and showing its work.
Large language models (LLMs) equipped with retrieval, as in the Retrieval-Augmented Generation (RAG) paradigm, should combine their parametric knowledge with external evidence, yet in practice they often hallucinate, over-trust noisy snippets, or ignore vital context. We introduce TCR (Transparent Conflict Resolution), a plug-and-play framework that makes this decision process observable and controllable. TCR (i) disentangles semantic match and factual consistency via dual contrastive encoders, (ii) estimates self-answerability to gauge confidence in internal memory, and (iii) feeds the three scalar signals to the generator through a lightweight soft prompt with SNR-based weighting. Across seven benchmarks, TCR improves conflict detection (+5 to +18 F1), raises knowledge-gap recovery by 21.4 pp, and cuts misleading-context overrides by 29.3 pp, while adding only 0.3% extra parameters. The signals align with human judgements and expose temporal decision patterns.
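To make the signal-fusion step concrete, here is a minimal sketch of how three scalar signals (semantic match, factual consistency, self-answerability) could be weighted by their signal-to-noise ratio and mapped onto a small soft-prompt bank. This is an illustration under stated assumptions, not the paper's implementation: the functions `snr_weights` and `build_soft_prompt`, the toy embedding size, and the random prompt basis are all hypothetical stand-ins.

```python
import numpy as np

def snr_weights(means, stds, eps=1e-8):
    """Weight each scalar signal by its signal-to-noise ratio
    (|mean| / std), normalized so the weights sum to 1, so that
    noisier signals contribute less to the soft prompt."""
    snr = np.abs(np.asarray(means)) / (np.asarray(stds) + eps)
    return snr / snr.sum()

def build_soft_prompt(signals, weights, basis):
    """Scale each signal by its SNR weight and project it onto a
    bank of prompt vectors (learned in a real system; random here),
    yielding soft-prompt embeddings to prepend to the generator input."""
    scaled = np.asarray(signals) * weights   # shape (3,)
    return scaled[:, None] * basis           # shape (3, d_model)

rng = np.random.default_rng(0)
d_model = 16                                 # toy embedding size
basis = rng.normal(size=(3, d_model))        # stand-in prompt bank

# Hypothetical signal estimates: mean and spread for
# semantic match, factual consistency, self-answerability.
means = np.array([0.82, 0.35, 0.61])
stds  = np.array([0.05, 0.20, 0.10])

w = snr_weights(means, stds)
prompt = build_soft_prompt(means, w, basis)
print(w.round(3), prompt.shape)              # weights favor the low-variance signal
```

The design choice the sketch mirrors is that a high-variance (unreliable) signal should be down-weighted before it conditions generation; in the toy numbers above, the noisy factual-consistency estimate receives the smallest weight.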
Similar Papers
Probing Latent Knowledge Conflict for Faithful Retrieval-Augmented Generation
Computation and Language
Makes AI answers more truthful and less wrong.
TruthfulRAG: Resolving Factual-level Conflicts in Retrieval-Augmented Generation with Knowledge Graphs
Computation and Language
Fixes AI answers when its knowledge is wrong.
From Facts to Conclusions: Integrating Deductive Reasoning in Retrieval-Augmented LLMs
Computation and Language
Makes AI answers more truthful and explainable.