SIRAG: Towards Stable and Interpretable RAG with A Process-Supervised Multi-Agent Framework
By: Junlin Wang, Zehao Wu, Shaowei Lu, and more
Potential Business Impact:
Makes AI smarter by checking facts before answering.
Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to access external knowledge sources, but the effectiveness of RAG relies on the coordination between the retriever and the generator. Since these components are developed independently, their interaction is often suboptimal: the retriever may return irrelevant or redundant documents, while the generator may fail to fully leverage retrieved evidence. In this work, we propose a process-supervised multi-agent framework to bridge the gap between retriever and generator. The framework introduces two lightweight agents: a Decision Maker, which determines when to continue retrieval or stop for answer generation, and a Knowledge Selector, which filters retrieved documents to retain only the most useful evidence. To provide fine-grained supervision, we employ an LLM-as-a-Judge that evaluates each intermediate action with process-level rewards, ensuring more accurate credit assignment than relying solely on final answer correctness. We further adopt a tree-structured rollout strategy to explore diverse reasoning paths, and train both agents with Proximal Policy Optimization (PPO) in an end-to-end manner. Experiments on single-hop and multi-hop question answering benchmarks show that our approach achieves higher accuracy, more stable convergence, and produces more interpretable reasoning trajectories compared with standard RAG baselines. Importantly, the proposed framework is modular and plug-and-play, requiring no modification to the retriever or generator, making it practical for real-world RAG applications.
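To make the control flow concrete, below is a minimal sketch of how the two agents could wrap an existing retriever and generator, as the abstract describes. The names DecisionMaker, KnowledgeSelector, retriever, and generator are hypothetical interfaces chosen for illustration; the paper does not specify these APIs, and the reward modeling and PPO training are omitted here.

# Hypothetical sketch of the SIRAG-style agent loop described in the abstract.
# The agent and retriever/generator interfaces are assumptions for illustration,
# not the paper's released code.

def sirag_answer(question, retriever, generator, decision_maker,
                 knowledge_selector, max_rounds=4):
    """Iteratively retrieve and filter evidence until the Decision Maker stops."""
    evidence = []  # accumulated documents kept by the Knowledge Selector
    for _ in range(max_rounds):
        # Decision Maker: continue retrieval, or stop and hand off to the generator?
        if decision_maker.should_stop(question, evidence):
            break
        candidates = retriever.retrieve(question, evidence)
        # Knowledge Selector: retain only the most useful retrieved documents
        evidence.extend(knowledge_selector.select(question, candidates))
    # The unmodified generator consumes the question plus the filtered evidence
    return generator.generate(question, evidence)

Because the agents only sit between the off-the-shelf retriever and generator, this mirrors the plug-and-play claim in the abstract: neither underlying component needs modification.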
Similar Papers
Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning
Computation and Language
Makes AI answer questions more truthfully.
Insight-RAG: Enhancing LLMs with Insight-Driven Augmentation
Computation and Language
Helps computers find better answers from many texts.