WebAnchor: Anchoring Agent Planning to Stabilize Long-Horizon Web Reasoning
By: Yu Xinmiao, Zhang Liwen, Feng Xiaocheng, and more
Potential Business Impact:
Helps AI plan better for complex online tasks.
Large Language Model (LLM)-based agents have shown strong capabilities in web information seeking, with reinforcement learning (RL) becoming a key optimization paradigm. However, planning remains a bottleneck, as existing methods struggle with long-horizon strategies. Our analysis reveals a critical phenomenon, the plan anchor, in which the first reasoning step disproportionately shapes downstream behavior in long-horizon web reasoning tasks. Current RL algorithms fail to account for this, distributing rewards uniformly across the trajectory. To address this, we propose Anchor-GRPO, a two-stage RL framework that decouples planning from execution. In Stage 1, the agent optimizes its first-step planning using fine-grained rubrics derived from self-play experiences and human calibration. In Stage 2, execution is aligned with the initial plan through sparse rewards, ensuring stable and efficient tool usage. We evaluate Anchor-GRPO on four benchmarks: BrowseComp, BrowseComp-Zh, GAIA, and XBench-DeepSearch. Across models from 3B to 30B parameters, Anchor-GRPO outperforms both baseline GRPO and First-step GRPO, improving task success and tool efficiency. Notably, WebAnchor-30B achieves 46.0% pass@1 on BrowseComp and 76.4% on GAIA. Anchor-GRPO also demonstrates strong scalability, achieving higher accuracy as model size and context length increase.
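To make the two-stage reward split concrete, here is a minimal sketch of how Anchor-GRPO's stage-dependent rewards could feed a GRPO-style group-relative advantage. This is an illustration under assumptions, not the paper's implementation: the names (Trajectory, rubric_score, execution_reward, anchor_grpo_rewards) and the toy keyword rubric are hypothetical stand-ins for the paper's rubrics derived from self-play and human calibration.

```python
# Hypothetical sketch of Anchor-GRPO's two-stage reward assignment.
# Stage 1: dense rubric reward on the first-step plan (the "anchor").
# Stage 2: sparse outcome reward aligning execution with that plan.
# All names and the toy rubric below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    plan: str                                   # first reasoning step (the anchor)
    steps: list = field(default_factory=list)   # subsequent tool calls / actions
    success: bool = False                       # whether the final answer was correct

def rubric_score(plan: str) -> float:
    """Stage 1: fine-grained reward on the first-step plan.
    The paper derives rubrics from self-play and human calibration;
    this toy heuristic just checks keyword coverage as a placeholder."""
    keywords = ("search", "verify", "compare")
    return sum(k in plan.lower() for k in keywords) / len(keywords)

def execution_reward(traj: Trajectory) -> float:
    """Stage 2: sparse reward -- only the trajectory outcome counts,
    which keeps execution aligned with the initial plan."""
    return 1.0 if traj.success else 0.0

def anchor_grpo_rewards(group: list[Trajectory], stage: int) -> list[float]:
    """Group-relative (GRPO-style) advantages over a group of rollouts,
    with the reward source switched by training stage."""
    raw = [rubric_score(t.plan) if stage == 1 else execution_reward(t)
           for t in group]
    mean = sum(raw) / len(raw)
    std = (sum((r - mean) ** 2 for r in raw) / len(raw)) ** 0.5 or 1.0
    return [(r - mean) / std for r in raw]  # normalized advantages

if __name__ == "__main__":
    group = [
        Trajectory(plan="Search for the entity, then verify dates.", success=True),
        Trajectory(plan="Guess the answer directly.", success=False),
    ]
    print(anchor_grpo_rewards(group, stage=1))  # dense plan-quality advantages
    print(anchor_grpo_rewards(group, stage=2))  # sparse outcome advantages
```

The key design point this mirrors is the decoupling: Stage 1 advantages depend only on the first-step plan, concentrating credit where the plan-anchor effect is strongest, while Stage 2 falls back to sparse outcome rewards so execution is optimized conditional on a good plan.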
Similar Papers
Stabilizing Reinforcement Learning for Honesty Alignment in Language Models on Deductive Reasoning
Computation and Language
Teaches AI to reason honestly and avoid mistakes.
WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents
Computation and Language
Helps computers find answers by searching the web.
WorkForceAgent-R1: Incentivizing Reasoning Capability in LLM-based Web Agents via Reinforcement Learning
Computation and Language
Helps computers do complex online jobs better.