TreeAdv: Tree-Structured Advantage Redistribution for Group-Based RL

Published: January 7, 2026 | arXiv ID: 2601.03703v1

By: Lang Cao, Hui Ruan, Yongqian Li, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Teaches AI to think smarter, not just longer.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement learning with group-based objectives, such as Group Relative Policy Optimization (GRPO), is a common framework for aligning large language models on complex reasoning tasks. However, standard GRPO treats each rollout trajectory as an independent flat sequence and assigns a single sequence-level advantage to all tokens, which leads to sample inefficiency and a length bias toward verbose, redundant chains of thought that add tokens without improving logical depth. We introduce TreeAdv (Tree-Structured Advantage Redistribution for Group-Based RL), which makes the tree structure of group rollouts explicit for both exploration and advantage assignment. Specifically, TreeAdv builds a group of trees (a forest) using an entropy-driven sampling method: each tree branches at high-uncertainty decisions while sharing low-uncertainty tokens across rollouts. TreeAdv then assigns token-level advantages to internal tree segments by redistributing the advantages of complete rollouts (the leaf nodes), and it applies directly to group-based objectives such as GRPO or GSPO. Across 10 math reasoning benchmarks, TreeAdv consistently outperforms GRPO and GSPO while using substantially fewer generated tokens under identical supervision, data, and decoding budgets.
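The redistribution step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each complete rollout (leaf) carries a scalar reward, normalizes rewards across the group in GRPO style, and assigns each internal segment the normalized mean reward of its descendant leaves. The names `TreeNode` and `redistribute` are hypothetical.

```python
# Hedged sketch of tree-structured advantage redistribution.
# Assumption: a leaf node is a complete rollout with a scalar reward;
# an internal node is a shared token segment whose advantage is the
# group-normalized mean reward of the leaves beneath it.
from dataclasses import dataclass, field
from statistics import mean, pstdev
from typing import List, Optional


@dataclass
class TreeNode:
    children: List["TreeNode"] = field(default_factory=list)
    reward: Optional[float] = None  # set only on leaves
    advantage: float = 0.0


def leaf_rewards(node: TreeNode) -> List[float]:
    """Collect rewards of all complete rollouts under this segment."""
    if not node.children:
        return [node.reward]
    rewards = []
    for child in node.children:
        rewards.extend(leaf_rewards(child))
    return rewards


def redistribute(root: TreeNode) -> TreeNode:
    """Assign every segment the normalized mean reward of its leaves."""
    group = leaf_rewards(root)
    mu = mean(group)
    sigma = pstdev(group) or 1.0  # avoid division by zero on uniform rewards

    def assign(node: TreeNode) -> None:
        node.advantage = (mean(leaf_rewards(node)) - mu) / sigma
        for child in node.children:
            assign(child)

    assign(root)
    return root
```

In this toy form, a shared prefix whose subtree contains mostly correct rollouts receives a positive advantage even though it never reaches a terminal reward itself, which is the intuition behind redistributing leaf advantages to internal segments.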

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)