Search and Refine During Think: Facilitating Knowledge Refinement for Improved Retrieval-Augmented Reasoning
By: Yaorui Shi, Sihang Li, Chang Wu, and more
Potential Business Impact:
Helps computers find better facts to answer questions.
Large language models (LLMs) have demonstrated impressive reasoning capabilities but are inherently limited by their knowledge reservoir. Retrieval-augmented reasoning mitigates this limitation by allowing LLMs to query external resources, but existing methods often retrieve irrelevant or noisy information, hindering accurate reasoning. In this paper, we propose AutoRefine, a reinforcement learning post-training framework that adopts a new "search-and-refine-during-think" paradigm. AutoRefine introduces explicit knowledge refinement steps between successive search calls, enabling the model to iteratively filter, distill, and organize evidence before generating an answer. Furthermore, we incorporate tailored retrieval-specific rewards alongside answer correctness rewards using group relative policy optimization. Experiments on single-hop and multi-hop QA benchmarks demonstrate that AutoRefine significantly outperforms existing approaches, particularly in complex, multi-hop reasoning scenarios. Detailed analysis shows that AutoRefine issues frequent, higher-quality searches and synthesizes evidence effectively.
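To make the control flow concrete, the sketch below illustrates a search-and-refine-during-think rollout and the combined reward the abstract describes. It is a minimal sketch based only on the abstract: the tag names (`<search>`, `<refine>`, `<answer>`), the toy keyword retriever, the heuristic refinement step, and the reward weights `w_answer`/`w_retrieval` are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a "search-and-refine-during-think" rollout and
# reward shaping. All names, tags, and weights are assumptions drawn only
# from the abstract, not AutoRefine's real code.
import re
from dataclasses import dataclass, field


@dataclass
class Rollout:
    trajectory: str = ""                       # generated text, including tags
    retrieved_docs: list = field(default_factory=list)


def retrieve(query: str, corpus: dict) -> list:
    """Stand-in retriever: return documents sharing a term with the query."""
    terms = query.lower().split()
    return [doc for doc in corpus.values()
            if any(t in doc.lower() for t in terms)]


def rollout_with_refinement(question: str, corpus: dict,
                            max_searches: int = 3) -> Rollout:
    """Alternate <search> and <refine> steps before emitting an <answer>.

    In AutoRefine the policy LLM decides when to search and what evidence
    to distill; a trivial heuristic stands in for the model here."""
    r = Rollout(trajectory=f"<think>{question}</think>\n")
    evidence = ""
    for _ in range(max_searches):
        query = question                       # a real policy would rewrite the query
        docs = retrieve(query, corpus)
        r.retrieved_docs.extend(docs)
        r.trajectory += f"<search>{query}</search>\n"
        # Refinement step: keep only sentences mentioning question terms.
        evidence = " ".join(
            s for d in docs for s in d.split(". ")
            if any(t in s.lower() for t in question.lower().split()))
        r.trajectory += f"<refine>{evidence}</refine>\n"
        if evidence:                           # a real policy decides when to stop
            break
    r.trajectory += f"<answer>{evidence or 'unknown'}</answer>\n"
    return r


def reward(rollout: Rollout, gold_answer: str, gold_doc_id: str, corpus: dict,
           w_answer: float = 1.0, w_retrieval: float = 0.5) -> float:
    """Combine answer correctness with a retrieval-specific reward
    (the weighting scheme is an assumption)."""
    m = re.search(r"<answer>(.*?)</answer>", rollout.trajectory, re.S)
    answer_reward = float(m is not None and
                          gold_answer.lower() in m.group(1).lower())
    retrieval_reward = float(corpus[gold_doc_id] in rollout.retrieved_docs)
    return w_answer * answer_reward + w_retrieval * retrieval_reward


def grpo_advantages(rewards: list) -> list:
    """Group relative policy optimization normalizes rewards within a
    group of rollouts sampled for the same question."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-6) for r in rewards]


if __name__ == "__main__":
    corpus = {"d1": "Paris is the capital of France. It hosts the Louvre."}
    r = rollout_with_refinement("capital of France", corpus)
    print(reward(r, "Paris", "d1", corpus))   # 1.5: answer + retrieval reward
    print(grpo_advantages([1.5, 0.0, 1.0, 0.5]))
```

The design point the sketch captures is that refinement is an explicit, rewardable step between retrieval and answering, so the retrieval-specific reward can give credit for surfacing the gold document even when the final answer is wrong.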
Similar Papers
Efficient Post-Training Refinement of Latent Reasoning in Large Language Models
Computation and Language
Improves AI thinking for better answers.
SmartSearch: Process Reward-Guided Query Refinement for Search Agents
Artificial Intelligence
Improves computer searches for better answers.
Reinforcement Fine-Tuning for Reasoning towards Multi-Step Multi-Source Search in Large Language Models
Information Retrieval
Helps AI answer questions about new events faster.