Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild
By: Jiatai Wang, Zhiwei Xu, Di Jin, and more
Potential Business Impact:
Helps computers pick the right answer when confused.
The proliferation of large language models (LLMs) has significantly advanced information retrieval systems, particularly in response generation (RG). Unfortunately, LLMs often face knowledge conflicts between internal memory and retrieved external information, arising from misinformation, biases, or outdated knowledge. These conflicts undermine response reliability and introduce uncertainty into decision-making. In this work, we analyze how LLMs navigate knowledge conflicts from an information-theoretic perspective and reveal that when conflicting and supplementary information exhibit significant differences, LLMs confidently resolve their preferences. However, when the distinction is ambiguous, LLMs experience heightened uncertainty. Based on this insight, we propose Swin-VIB, a novel framework that integrates a pipeline of variational information bottleneck models to adaptively augment retrieved information and guide LLM preference in response generation. Extensive experiments on single-choice, open-ended question-answering (QA), and retrieval-augmented generation (RAG) tasks validate our theoretical findings and demonstrate the efficacy of Swin-VIB. Notably, our method improves single-choice task accuracy by at least 7.54% over competitive baselines.
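To make the variational information bottleneck (VIB) component concrete, below is a minimal sketch of a single VIB layer of the kind the abstract names. The dimensions, module names, and the beta weight are illustrative assumptions for exposition, not the paper's actual Swin-VIB pipeline or hyperparameters.

```python
# Minimal VIB sketch (assumed illustration, not the paper's Swin-VIB).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    def __init__(self, in_dim: int, bottleneck_dim: int, out_dim: int):
        super().__init__()
        # Encoder maps an input (e.g. a retrieved-passage embedding)
        # to the parameters of a Gaussian over the bottleneck code.
        self.mu = nn.Linear(in_dim, bottleneck_dim)
        self.logvar = nn.Linear(in_dim, bottleneck_dim)
        # Decoder predicts the task target from the compressed code.
        self.decoder = nn.Linear(bottleneck_dim, out_dim)

    def forward(self, x: torch.Tensor):
        mu, logvar = self.mu(x), self.logvar(x)
        # Reparameterization trick: sample z ~ N(mu, sigma^2) differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        logits = self.decoder(z)
        # KL(N(mu, sigma^2) || N(0, I)) limits how much information
        # about x the bottleneck retains (the compression term).
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
        return logits, kl

# IB objective: task loss + beta * compression term.
vib = VIB(in_dim=768, bottleneck_dim=64, out_dim=2)
x = torch.randn(8, 768)          # stand-in for retrieved-context embeddings
y = torch.randint(0, 2, (8,))    # stand-in labels (e.g. keep vs. override memory)
logits, kl = vib(x)
loss = F.cross_entropy(logits, y) + 1e-3 * kl
loss.backward()
```

Trading off the task loss against the KL term is what lets such a layer compress retrieved evidence while keeping only the information useful for resolving the model's preference.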
Similar Papers
LLM-Independent Adaptive RAG: Let the Question Speak for Itself
Computation and Language
Smartly finds answers, saving computer power.
Retrieval-Augmented Generation with Conflicting Evidence
Computation and Language
AI agents debate to find true answers.
Retrieval Augmented Question Answering: When Should LLMs Admit Ignorance?
Computation and Language
Helps computers answer questions better with less info.