DeepPlanner: Scaling Planning Capability for Deep Research Agents via Advantage Shaping
By: Wei Fan, Wenlin Yao, Zheng Li, and more
Potential Business Impact:
Helps AI research agents plan complex, multi-step tasks better.
Large language models (LLMs) augmented with multi-step reasoning and action generation abilities have shown promise in leveraging external tools to tackle complex tasks that require long-horizon planning. However, existing approaches either rely on implicit planning in the reasoning stage or introduce explicit planners without systematically addressing how to optimize the planning stage. As evidence, we observe that under vanilla reinforcement learning (RL), planning tokens exhibit significantly higher entropy than other action tokens, revealing uncertain decision points that remain under-optimized. To address this, we propose DeepPlanner, an end-to-end RL framework that effectively enhances the planning capabilities of deep research agents. Our approach shapes token-level advantages with an entropy-based term to allocate larger updates to high-entropy tokens, and selectively upweights sample-level advantages for planning-intensive rollouts. Extensive experiments across seven deep research benchmarks demonstrate that DeepPlanner improves planning quality and achieves state-of-the-art results under a substantially lower training budget.
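To make the advantage-shaping idea concrete, here is a minimal sketch of how an entropy-based token-level term plus a sample-level upweight for planning-intensive rollouts could be implemented in a PPO/GRPO-style trainer. The function name (shape_advantages), the hyperparameters (alpha, planning_bonus, planning_ratio_thresh), and the specific scaling rules are assumptions for illustration; the paper's exact formulation may differ.

```python
# Minimal sketch of DeepPlanner-style advantage shaping (hypothetical names
# and hyperparameters; not the paper's exact formulation).
import torch

def shape_advantages(
    advantages: torch.Tensor,      # (batch, seq_len) token-level advantages
    token_entropy: torch.Tensor,   # (batch, seq_len) per-token policy entropy
    planning_mask: torch.Tensor,   # (batch, seq_len) 1 for planning tokens, else 0
    alpha: float = 0.1,            # strength of the entropy-based term (assumed)
    planning_bonus: float = 0.2,   # sample-level upweight for planning-heavy rollouts (assumed)
    planning_ratio_thresh: float = 0.3,  # fraction of planning tokens above which a rollout is upweighted (assumed)
) -> torch.Tensor:
    # Token level: allocate larger updates to high-entropy tokens by scaling
    # each advantage with an entropy term normalized by the sequence mean.
    ent_norm = token_entropy / (token_entropy.mean(dim=-1, keepdim=True) + 1e-8)
    shaped = advantages * (1.0 + alpha * ent_norm)

    # Sample level: upweight rollouts whose fraction of planning tokens is high,
    # so planning-intensive trajectories contribute more to the policy update.
    planning_ratio = planning_mask.float().mean(dim=-1, keepdim=True)
    sample_weight = torch.where(
        planning_ratio > planning_ratio_thresh,
        torch.full_like(planning_ratio, 1.0 + planning_bonus),
        torch.ones_like(planning_ratio),
    )
    return shaped * sample_weight
```

The shaped advantages would then replace the raw advantages in the policy-gradient loss, leaving the rest of the RL pipeline unchanged.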
Similar Papers
AI-SearchPlanner: Modular Agentic Search via Pareto-Optimal Multi-Objective Reinforcement Learning
Artificial Intelligence
Helps AI find answers better by planning searches.
Encouraging Good Processes Without the Need for Good Answers: Reinforcement Learning for LLM Agent Planning
Machine Learning (CS)
Teaches AI to plan better, making answers smarter.