TreeAdv: Tree-Structured Advantage Redistribution for Group-Based RL
By: Lang Cao, Hui Ruan, Yongqian Li, and more
Potential Business Impact:
Teaches AI to think smarter, not just longer.
Reinforcement learning with group-based objectives, such as Group Relative Policy Optimization (GRPO), is a common framework for aligning large language models on complex reasoning tasks. However, standard GRPO treats each rollout trajectory as an independent flat sequence and assigns a single sequence-level advantage to every token, which leads to sample inefficiency and a length bias toward verbose, redundant chains of thought that add no logical depth. We introduce TreeAdv (Tree-Structured Advantage Redistribution for Group-Based RL), which makes the tree structure of group rollouts explicit for both exploration and advantage assignment. Specifically, TreeAdv builds a group of trees (a forest) with an entropy-driven sampling method in which each tree branches at high-uncertainty decisions while sharing low-uncertainty tokens across rollouts. TreeAdv then assigns token-level advantages to internal tree segments by redistributing the advantages of complete rollouts (the leaf nodes), and it plugs readily into group-based objectives such as GRPO or GSPO. Across 10 math reasoning benchmarks, TreeAdv consistently outperforms GRPO and GSPO while using substantially fewer generated tokens under identical supervision, data, and decoding budgets.
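The abstract describes the two mechanisms only at a high level, so here is a minimal, self-contained Python sketch of one plausible reading: an entropy gate decides where a rollout tree branches during decoding, complete rollouts (leaves) receive GRPO-style group-relative advantages, and each internal shared segment inherits an aggregate of the advantages of the leaves beneath it. Everything here is an illustrative assumption rather than the paper's implementation: the names (Segment, should_branch, redistribute), the threshold tau, and in particular the mean-aggregation redistribution rule.

```python
import math
from dataclasses import dataclass, field
from statistics import mean, pstdev

def should_branch(next_token_probs, tau=2.0):
    """Entropy gate (assumed form): branch the tree at this decoding step
    iff the policy's next-token entropy exceeds the threshold tau; otherwise
    the token is shared by all rollouts passing through this segment."""
    h = -sum(p * math.log(p) for p in next_token_probs if p > 0.0)
    return h > tau

@dataclass
class Segment:
    """One run of shared tokens in the rollout tree (hypothetical type)."""
    tokens: list
    children: list = field(default_factory=list)  # branches opened at a high-entropy step
    reward: float = 0.0      # verifier reward; meaningful only at leaves
    advantage: float = 0.0   # filled in below

def group_advantages(rewards):
    """GRPO-style group-relative advantage: (r - mean) / std over the group."""
    mu, sigma = mean(rewards), pstdev(rewards) or 1.0  # guard uniform rewards
    return [(r - mu) / sigma for r in rewards]

def leaves(node):
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in leaves(child)]

def redistribute(node):
    """Give every internal segment the mean advantage of the complete
    rollouts (leaves) below it; all its tokens then share that value.
    Mean aggregation is an assumption, not the paper's stated rule."""
    if node.children:
        node.advantage = mean(l.advantage for l in leaves(node))
        for child in node.children:
            redistribute(child)

# A tiny hand-built tree standing in for one member of the forest
# (actual decoding with should_branch is not simulated here).
risky = Segment(tokens=[6], children=[Segment(tokens=[7], reward=1.0),
                                      Segment(tokens=[8, 9], reward=0.0)])
root = Segment(tokens=[1, 2, 3],
               children=[Segment(tokens=[4, 5], reward=1.0), risky])

all_leaves = leaves(root)
for leaf, adv in zip(all_leaves, group_advantages([l.reward for l in all_leaves])):
    leaf.advantage = adv
redistribute(root)
print(round(risky.advantage, 3), round(root.advantage, 3))  # ~-0.354, ~0.0
```

Under this reading, the shared segment `risky` is penalized because half of its continuations fail, even though one rollout through it succeeds; that is the finer-grained credit assignment a single flat sequence-level advantage cannot express. The root averages to roughly zero because leaf advantages are z-scored within the group.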
Similar Papers
TreeGRPO: Tree-Advantage GRPO for Online RL Post-Training of Diffusion Models
Machine Learning (CS)
Trains AI to make better pictures much faster.
Tree-OPO: Off-policy Monte Carlo Tree-Guided Advantage Optimization for Multistep Reasoning
Artificial Intelligence
Teaches computers to learn better from choices.
Tree Search for LLM Agent Reinforcement Learning
Machine Learning (CS)
Teaches AI to learn better from mistakes.