Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems

Published: October 28, 2025 | arXiv ID: 2510.24476v1

By: Yihan Li, Xiyuan Fu, Ghanshyam Verma, and more

Potential Business Impact:

Helps AI systems give accurate, grounded answers instead of making things up.

Business Areas:
Augmented Reality Hardware, Software

Hallucination remains one of the key obstacles to the reliable deployment of large language models (LLMs), particularly in real-world applications. Among various mitigation strategies, Retrieval-Augmented Generation (RAG) and reasoning enhancement have emerged as two of the most effective and widely adopted approaches, marking a shift from merely suppressing hallucinations to balancing creativity and reliability. However, their synergistic potential and underlying mechanisms for hallucination mitigation have not yet been systematically examined. This survey adopts an application-oriented perspective of capability enhancement to analyze how RAG, reasoning enhancement, and their integration in Agentic Systems mitigate hallucinations. We propose a taxonomy distinguishing knowledge-based and logic-based hallucinations, systematically examine how RAG and reasoning address each, and present a unified framework supported by real-world applications, evaluations, and benchmarks.
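To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of how RAG and a reasoning-based verification step can be combined in a simple agentic loop. It is not taken from the paper: the names Evidence, retrieve, generate, verify_claims, and agentic_answer are illustrative placeholders for a retriever, an LLM call, and a self-check, and a real system would swap in an actual vector store, model API, and claim-verification method.

```python
# Illustrative sketch only; all functions below are placeholders, not APIs from the survey.
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str
    text: str


def retrieve(query: str, top_k: int = 3) -> list[Evidence]:
    """Placeholder retriever: in practice a vector store or search API returns
    passages relevant to the query (targeting knowledge-based hallucinations)."""
    return [Evidence(source="doc-1", text="Relevant passage about the query.")]


def generate(prompt: str) -> str:
    """Placeholder LLM call; replace with any chat/completion API."""
    return "Draft answer grounded in the provided passages."


def verify_claims(answer: str, evidence: list[Evidence]) -> bool:
    """Placeholder reasoning step: check that the answer is supported by the
    retrieved evidence (targeting logic-based hallucinations). A real system
    might use an LLM judge or an NLI model here."""
    return bool(answer) and all(ev.text for ev in evidence)


def agentic_answer(query: str, max_rounds: int = 2) -> str:
    """Minimal agentic loop: retrieve -> generate -> verify, retrying with a
    broader retrieval if verification fails."""
    evidence = retrieve(query)
    for _ in range(max_rounds):
        context = "\n".join(f"[{ev.source}] {ev.text}" for ev in evidence)
        answer = generate(
            f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        )
        if verify_claims(answer, evidence):
            return answer
        # Unsupported claims found: widen retrieval and try again.
        evidence = retrieve(query, top_k=5)
    return "I could not verify an answer against the retrieved sources."


if __name__ == "__main__":
    print(agentic_answer("How do RAG and reasoning jointly reduce hallucination?"))
```

The loop reflects the survey's framing at a very high level: retrieval grounds the model in external knowledge, while the verification step applies reasoning to catch unsupported claims before an answer is returned.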

Country of Origin
🇮🇪 Ireland

Page Count
25 pages

Category
Computer Science:
Computation and Language