Tree Search for LLM Agent Reinforcement Learning
By: Yuxiang Ji, Ziyu Ma, Yong Wang, and more
Potential Business Impact:
Teaches AI to learn better from mistakes.
Recent advances in reinforcement learning (RL) have significantly enhanced the agentic capabilities of large language models (LLMs). In long-horizon, multi-turn agent tasks, existing approaches driven solely by outcome rewards often suffer from sparse supervision. To address this challenge, we propose Tree-based Group Relative Policy Optimization (Tree-GRPO), a grouped agent RL method based on tree search, where each tree node represents a complete agent interaction step. By sharing common prefixes, tree-search sampling increases the number of rollouts achievable within a fixed budget of tokens or tool calls. Moreover, we find that the tree-structured trajectories naturally allow the construction of step-wise process supervision signals using only the outcome reward. Based on this, Tree-GRPO estimates grouped relative advantages at both the intra-tree and inter-tree levels. Through theoretical analysis, we show that the objective of intra-tree group relative policy optimization is equivalent to that of step-level direct preference learning. Experiments across 11 datasets and 3 types of QA tasks demonstrate the superiority of the proposed tree-based RL over chain-based RL methods.
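Since the abstract describes the advantage estimation only at a high level, below is a minimal, hypothetical Python sketch of how intra-tree (sibling-level) and inter-tree (root-level) group-relative advantages could be derived from outcome rewards attached to leaf trajectories. The `TreeNode` structure and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: group-relative advantages over a rollout tree,
# using only outcome rewards on the leaves. All names here are
# assumptions for illustration, not the paper's code.

from dataclasses import dataclass, field
from typing import List, Optional
import statistics


@dataclass
class TreeNode:
    """One complete agent interaction step (e.g. reasoning + tool call + observation)."""
    reward: Optional[float] = None          # outcome reward, set only on leaf nodes
    children: List["TreeNode"] = field(default_factory=list)
    value: float = 0.0                      # mean leaf return of rollouts passing through this node
    advantage: float = 0.0


def backfill_values(node: TreeNode) -> List[float]:
    """Propagate leaf outcome rewards upward: a node's value is the mean
    return of all rollouts that share this node as a prefix."""
    if not node.children:
        node.value = node.reward if node.reward is not None else 0.0
        return [node.value]
    leaf_returns = [r for child in node.children for r in backfill_values(child)]
    node.value = sum(leaf_returns) / len(leaf_returns)
    return leaf_returns


def intra_tree_advantages(node: TreeNode) -> None:
    """Sibling steps sharing a common prefix form a group; each child's
    advantage is its value relative to the sibling mean, giving a step-wise
    signal built from outcome rewards alone."""
    if not node.children:
        return
    sibling_mean = sum(c.value for c in node.children) / len(node.children)
    for child in node.children:
        child.advantage = child.value - sibling_mean
        intra_tree_advantages(child)


def inter_tree_advantages(roots: List[TreeNode]) -> None:
    """Whole trees sampled for the same prompt form a second group;
    normalize each tree's return against the group, GRPO-style."""
    returns = [root.value for root in roots]
    mean = statistics.mean(returns)
    std = statistics.pstdev(returns) or 1.0
    for root in roots:
        root.advantage += (root.value - mean) / std
```

Under these assumptions, a batch would be processed by calling `backfill_values` and `intra_tree_advantages` on each tree, then `inter_tree_advantages` on the set of trees for the same prompt, so every step inherits both a local (sibling) and a global (tree-group) relative signal.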
Similar Papers
TreeGRPO: Tree-Advantage GRPO for Online RL Post-Training of Diffusion Models
Machine Learning (CS)
Trains AI to make better pictures much faster.
Tree-OPO: Off-policy Monte Carlo Tree-Guided Advantage Optimization for Multistep Reasoning
Artificial Intelligence
Teaches computers to learn better from choices.
Training-Free Group Relative Policy Optimization
Computation and Language
Teaches computers to solve new problems better.