InfoFlow: Reinforcing Search Agent Via Reward Density Optimization
By: Kun Luo, Hongjin Qian, Zheng Liu, and more
Potential Business Impact:
Helps AI learn better by rewarding small steps.
Reinforcement Learning with Verifiable Rewards (RLVR) is a promising approach for enhancing agentic deep search. However, its application is often hindered by low Reward Density in deep search scenarios, where agents expend significant exploratory cost for infrequent, often null, final rewards. In this paper, we formalize this challenge as the Reward Density Optimization problem, which aims to improve the reward obtained per unit of exploration cost. We introduce InfoFlow, a systematic framework that tackles this problem from three aspects. 1) Subproblem decomposition: breaking long-range tasks into subproblems and assigning process rewards, thereby providing denser learning signals. 2) Failure-guided hints: injecting corrective guidance into stalled trajectories to increase the probability of successful outcomes. 3) Dual-agent refinement: employing a dual-agent architecture to offload the cognitive burden of deep exploration; a refiner agent synthesizes the search history, compressing the researcher agent's perceived trajectory, reducing exploration cost and increasing overall reward density. We evaluate InfoFlow on multiple agentic search benchmarks, where it significantly outperforms strong baselines and enables lightweight LLMs to achieve performance comparable to advanced proprietary LLMs.
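To make the "reward per unit of exploration cost" idea concrete, here is a minimal sketch of how a reward-density score could be computed for a search trajectory, with dense per-subproblem process rewards supplementing the sparse final outcome reward. The names (Step, reward_density, process_reward, outcome_reward) and the specific reward values are illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Step:
    """One exploration step in a search trajectory (hypothetical structure)."""
    cost: float           # exploration cost of this step, e.g. tool calls or tokens
    subgoal_solved: bool  # whether this step resolved a decomposed subproblem


def reward_density(steps: List[Step],
                   final_success: bool,
                   process_reward: float = 0.2,
                   outcome_reward: float = 1.0) -> float:
    """Reward obtained per unit of exploration cost.

    Dense process rewards for solved subproblems supplement the sparse
    final outcome reward, raising the density of the learning signal.
    """
    total_reward = outcome_reward * final_success
    total_reward += process_reward * sum(s.subgoal_solved for s in steps)
    total_cost = sum(s.cost for s in steps) or 1.0  # avoid division by zero
    return total_reward / total_cost


# Sparse outcome reward only vs. outcome plus per-subproblem process rewards.
trajectory = [Step(cost=3.0, subgoal_solved=True),
              Step(cost=4.0, subgoal_solved=False),
              Step(cost=2.0, subgoal_solved=True)]
print(reward_density(trajectory, final_success=False, process_reward=0.0))  # 0.0
print(reward_density(trajectory, final_success=False))                      # ~0.044
```

In this toy setting, a failed trajectory yields zero signal under a purely outcome-based reward, while process rewards for the two solved subproblems still produce a nonzero density, which is the kind of denser learning signal the paper's subproblem decomposition is designed to provide.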
Similar Papers
RLFR: Extending Reinforcement Learning for LLMs with Flow Environment
Machine Learning (CS)
Helps AI learn better by watching how it thinks.
The Reasoning Boundary Paradox: How Reinforcement Learning Constrains Language Models
Artificial Intelligence
Fixes AI reasoning errors by focusing on hard problems.
Towards better dense rewards in Reinforcement Learning Applications
Artificial Intelligence
Teaches robots to learn tasks faster with better rewards.