GlobalRAG: Enhancing Global Reasoning in Multi-hop Question Answering via Reinforcement Learning
By: Jinchang Luo, Mingquan Cheng, Fan Wan, and more
Potential Business Impact:
Helps computers answer hard questions by planning steps.
Reinforcement learning has recently shown promise in improving retrieval-augmented generation (RAG). Despite these advances, its effectiveness in multi-hop question answering (QA) remains constrained by two fundamental problems: (i) the absence of global planning to structure multi-step reasoning, and (ii) unfaithful execution, which hinders effective query formulation and consistent use of retrieved evidence. We propose GlobalRAG, a reinforcement learning framework designed to enhance global reasoning in multi-hop QA. GlobalRAG decomposes questions into subgoals, coordinates retrieval with reasoning, and refines evidence iteratively. To guide this process, we introduce a Planning Quality Reward and a SubGoal Completion Reward, which encourage coherent planning and reliable subgoal execution. In addition, a progressive weight annealing strategy balances process-oriented and outcome-based objectives. Extensive experiments on both in-domain and out-of-domain benchmarks demonstrate that GlobalRAG significantly outperforms strong baselines while using only 8k training samples (42% of the training data used by strong baselines), achieving average improvements of 14.2% in both EM and F1.
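The progressive weight annealing strategy mentioned above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the equal weighting of the two process rewards, and the linear annealing schedule are all assumptions for clarity.

```python
# Hypothetical sketch of progressive weight annealing: process-oriented
# rewards (planning quality, subgoal completion) dominate early in training
# and are gradually traded for the outcome-based reward.
# The linear schedule and equal process weighting are assumptions.

def annealed_reward(planning_reward: float,
                    subgoal_reward: float,
                    outcome_reward: float,
                    step: int,
                    total_steps: int) -> float:
    """Blend process and outcome rewards with a linearly decaying weight."""
    alpha = 1.0 - step / total_steps      # process weight decays 1 -> 0
    process = 0.5 * (planning_reward + subgoal_reward)
    return alpha * process + (1.0 - alpha) * outcome_reward

# Early in training the process rewards dominate; late in training the
# outcome reward (e.g. EM/F1 against the gold answer) takes over.
early = annealed_reward(0.8, 0.6, 0.0, step=0, total_steps=1000)
late = annealed_reward(0.8, 0.6, 1.0, step=1000, total_steps=1000)
```

Under this sketch, a correct final answer contributes little reward at the start of training but fully determines it at the end, matching the abstract's stated balance between process-oriented and outcome-based objectives.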
Similar Papers
Towards Global Retrieval Augmented Generation: A Benchmark for Corpus-Level Reasoning
Computation and Language
Helps computers find answers across many documents.
Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning
Computation and Language
Makes AI answer questions more truthfully.
Credible Plan-Driven RAG Method for Multi-Hop Question Answering
Computation and Language
Answers hard questions by planning, doing, and checking.