TGPO: Tree-Guided Preference Optimization for Robust Web Agent Reinforcement Learning
By: Ziyuan Chen, Zhenghui Zhao, Zhangye Han, and more
Potential Business Impact:
Teaches computers to use websites better.
With the rapid advancement of large language models and vision-language models, employing large models as Web Agents has become essential for automated web interaction. However, training Web Agents with reinforcement learning faces critical challenges, including misallocated credit assignment, prohibitively high annotation costs, and reward sparsity. To address these issues, we propose Tree-Guided Preference Optimization (TGPO), an offline reinforcement learning framework that introduces a tree-structured trajectory representation, merging semantically identical states across trajectories to eliminate label conflicts. Our framework incorporates a Process Reward Model that automatically generates fine-grained rewards through subgoal progress, redundancy detection, and action verification. Additionally, a dynamic weighting mechanism prioritizes high-impact decision points during training. Experiments on Online-Mind2Web and our self-constructed C-WebShop datasets demonstrate that TGPO significantly outperforms existing methods, achieving higher success rates with fewer redundant steps.
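The abstract's core idea, merging semantically identical states from different trajectories into a shared tree node so that their labels no longer conflict, can be illustrated with a short sketch. The code below is a hypothetical illustration rather than the authors' implementation: TrajectoryTree, TreeNode, add_trajectory, and the semantic state keys are assumed names, and the semantic matching of page states is left abstract.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TreeNode:
    """One merged web state; edges to children are labeled by (action, next_state_key)."""
    state_key: str
    children: Dict[Tuple[str, str], "TreeNode"] = field(default_factory=dict)
    visit_count: int = 0
    reward_sum: float = 0.0  # accumulated per-step (process) reward assigned to this node


class TrajectoryTree:
    """Merges multiple trajectories into one tree by sharing identical prefixes."""

    def __init__(self) -> None:
        self.root = TreeNode(state_key="<initial>")

    def add_trajectory(self, steps: List[Tuple[str, str, float]]) -> None:
        """steps: (action, next_state_key, step_reward) triples.

        next_state_key is assumed to come from some semantic summary of the
        resulting page (e.g. a normalized accessibility tree), so trajectories
        that reach the same state via the same action share one node.
        """
        node = self.root
        for action, state_key, reward in steps:
            edge = (action, state_key)
            child = node.children.get(edge)
            if child is None:
                child = TreeNode(state_key=state_key)
                node.children[edge] = child
            child.visit_count += 1
            child.reward_sum += reward
            node = child


# Two shopping trajectories share their first two steps; the tree merges them,
# so the branch point at the "results" state carries both outcomes instead of
# two conflicting trajectory-level labels.
tree = TrajectoryTree()
tree.add_trajectory([
    ("open_search", "search_page", 0.2),
    ("type_query", "results", 0.3),
    ("click_item_3", "product_A", 1.0),   # path that completes the task
])
tree.add_trajectory([
    ("open_search", "search_page", 0.2),
    ("type_query", "results", 0.3),
    ("click_item_7", "product_B", 0.0),   # redundant / failed path
])

results_node = tree.root.children[("open_search", "search_page")].children[("type_query", "results")]
print(len(results_node.children))  # 2: the trajectories diverge only at this node
print(results_node.visit_count)    # 2: the merged node was visited by both trajectories

In the paper's framing, per-node rewards would come from the Process Reward Model (subgoal progress, redundancy detection, action verification) rather than the hand-set values used here, and the dynamic weighting mechanism would emphasize branch points like the one in this example during preference optimization.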
Similar Papers
Tree Search for LLM Agent Reinforcement Learning
Machine Learning (CS)
Teaches AI to learn better from mistakes.
TreeGRPO: Tree-Advantage GRPO for Online RL Post-Training of Diffusion Models
Machine Learning (CS)
Trains AI to make better pictures much faster.