Score: 1

Whose Facts Win? LLM Source Preferences under Knowledge Conflicts

Published: January 7, 2026 | arXiv ID: 2601.03746v1

By: Jakob Schuster, Vagrant Gautam, Katja Markert

Potential Business Impact:

Makes AI trust reliable sources more, not just loud ones.

Business Areas:
Semantic Search, Internet Services

As large language models (LLMs) are more frequently used in retrieval-augmented generation pipelines, it is increasingly relevant to study their behavior under knowledge conflicts. Thus far, the role of the source of the retrieved information has gone unexamined. We address this gap with a novel framework to investigate how source preferences affect LLM resolution of inter-context knowledge conflicts in English, motivated by interdisciplinary research on credibility. With a comprehensive, tightly-controlled evaluation of 13 open-weight LLMs, we find that LLMs prefer institutionally-corroborated information (e.g., government or newspaper sources) over information from people and social media. However, these source preferences can be reversed by simply repeating information from less credible sources. To mitigate repetition effects and maintain consistent preferences, we propose a novel method that reduces repetition bias by up to 99.8%, while also maintaining at least 88.8% of original preferences. We release all data and code to encourage future work on credibility and source preferences in knowledge-intensive NLP.
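To make the evaluation setup concrete, below is a minimal sketch (not the authors' released code) of how an inter-context knowledge conflict probe might be constructed: two conflicting snippets, each attributed to a source type (e.g., a government website vs. a social media post), are placed in the prompt, and the model's answer is labeled by which claim it sides with. All names (build_conflict_prompt, score_preference, the example claims, and the stubbed generate call) are hypothetical illustrations, not the paper's framework.

```python
# Illustrative sketch of a source-attributed knowledge conflict probe.
# Assumptions: function names, example claims, and the stubbed model call
# are hypothetical; swap `generate` for any real LLM inference call.

def build_conflict_prompt(question: str, claim_a: str, source_a: str,
                          claim_b: str, source_b: str) -> str:
    """Present two conflicting evidence snippets, each attributed to a source,
    and ask the model to answer using only the given context."""
    return (
        "Answer the question using only the context below.\n\n"
        f"[Context 1, from {source_a}] {claim_a}\n"
        f"[Context 2, from {source_b}] {claim_b}\n\n"
        f"Question: {question}\nAnswer:"
    )


def score_preference(answer: str, answer_a: str, answer_b: str) -> str:
    """Label which of the two conflicting claims the model's answer sides with."""
    answer = answer.lower()
    if answer_a.lower() in answer and answer_b.lower() not in answer:
        return "source_a"
    if answer_b.lower() in answer and answer_a.lower() not in answer:
        return "source_b"
    return "unclear"


if __name__ == "__main__":
    prompt = build_conflict_prompt(
        question="In which year was the bridge opened?",
        claim_a="The bridge was opened in 1998.", source_a="a government website",
        claim_b="The bridge was opened in 2003.", source_b="a social media post",
    )
    # Stand-in for a call to an open-weight LLM; stubbed output for illustration.
    generate = lambda p: "The bridge was opened in 1998."
    print(score_preference(generate(prompt), "1998", "2003"))  # -> "source_a"
```

A repetition manipulation of the kind the abstract describes could then be probed, for instance, by duplicating the less credible snippet in the context and checking whether the preference label flips.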

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
28 pages

Category
Computer Science: Computation and Language