Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs
By: Yangning Li, Weizhi Zhang, Yuyao Yang, and more
Potential Business Impact:
Helps computers answer harder questions using facts.
Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-retrieval perspective. We first map how advanced reasoning optimizes each stage of RAG (Reasoning-Enhanced RAG). Then, we show how retrieved knowledge of different types supplies missing premises and expands context for complex inference (RAG-Enhanced Reasoning). Finally, we spotlight emerging Synergized RAG-Reasoning frameworks, where (agentic) LLMs iteratively interleave search and reasoning to achieve state-of-the-art performance across knowledge-intensive benchmarks. We categorize methods, datasets, and open challenges, and outline research avenues toward deeper RAG-Reasoning systems that are more effective, multimodally-adaptive, trustworthy, and human-centric. The collection is available at https://github.com/DavidZWZ/Awesome-RAG-Reasoning.
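To make the "iteratively interleave search and reasoning" idea concrete, here is a minimal Python sketch of such an agentic loop. It is not code from the survey or its repository: the corpus, the `retrieve` word-overlap ranker, and the `llm_step` stub (which stands in for a real LLM deciding whether to search again or answer) are all illustrative assumptions.

```python
# Minimal sketch of an interleaved retrieve-and-reason (agentic RAG) loop.
# CORPUS, retrieve(), and llm_step() are toy placeholders, not the survey's code.

from typing import List, Tuple

CORPUS = [
    "Retrieval-Augmented Generation grounds LLM outputs in external documents.",
    "Multi-hop questions require chaining several retrieved facts together.",
    "Agentic RAG systems decide at each step whether to search or to answer.",
]


def retrieve(query: str, k: int = 2) -> List[str]:
    """Toy retriever: rank corpus passages by word overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda p: -len(query_terms & set(p.lower().split())))
    return scored[:k]


def llm_step(question: str, evidence: List[str]) -> Tuple[str, str]:
    """Stub for an LLM call: request more evidence until enough is gathered, then answer."""
    if len(evidence) < 2:
        return "search", question  # ask the retriever for more grounding
    return "answer", f"Answer to '{question}' grounded in {len(evidence)} passages."


def synergized_rag_reasoning(question: str, max_steps: int = 4) -> str:
    """Interleave reasoning and retrieval until the model commits to an answer."""
    evidence: List[str] = []
    for _ in range(max_steps):
        action, payload = llm_step(question, evidence)
        if action == "answer":              # model judges its grounding sufficient
            return payload
        evidence.extend(retrieve(payload))  # otherwise search, then reason again
    return llm_step(question, evidence)[1]  # fall back to answering with what we have


if __name__ == "__main__":
    print(synergized_rag_reasoning("How do agentic RAG systems handle multi-hop questions?"))
```

In real systems, the `llm_step` decision is made by the LLM itself (e.g., by emitting a search action or a final answer), which is what distinguishes Synergized RAG-Reasoning from a fixed retrieve-then-generate pipeline.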
Similar Papers
Reasoning RAG via System 1 or System 2: A Survey on Reasoning Agentic Retrieval-Augmented Generation for Industry Challenges
Artificial Intelligence
Lets AI learn and use outside information better.
Synergizing RAG and Reasoning: A Systematic Review
Information Retrieval
Helps smart computer programs solve harder problems.
Agentic Retrieval-Augmented Generation: A Survey on Agentic RAG
Artificial Intelligence
AI agents help computers answer questions with new info.