Score: 2

Tree-OPO: Off-policy Monte Carlo Tree-Guided Advantage Optimization for Multistep Reasoning

Published: September 11, 2025 | arXiv ID: 2509.09284v1

By: Bingning Huang, Tu Nguyen, Matthieu Zimmer

BigTech Affiliations: Huawei

Potential Business Impact:

Reuses search-tree reasoning traces to train AI models to solve multi-step problems more reliably.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Recent advances in reasoning with large language models (LLMs) have shown the effectiveness of Monte Carlo Tree Search (MCTS) for generating high-quality intermediate trajectories, particularly in math and symbolic domains. Inspired by this, we explore how MCTS-derived trajectories, traditionally used for training value or reward models, can be repurposed to improve policy optimization in preference-based reinforcement learning (RL). Specifically, we focus on Group Relative Policy Optimization (GRPO), a recent algorithm that enables preference-consistent policy learning without value networks. We propose a staged GRPO training paradigm where completions are derived from partially revealed MCTS rollouts, introducing a novel tree-structured setting for advantage estimation. This leads to a rich class of prefix-conditioned reward signals, which we analyze theoretically and empirically. Our initial results indicate that while structured advantage estimation can stabilize updates and better reflect compositional reasoning quality, challenges such as advantage saturation and reward signal collapse remain. We propose heuristic and statistical solutions to mitigate these issues and discuss open challenges for learning under staged or tree-like reward structures.
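The core estimator the abstract refers to can be illustrated concretely. Below is a minimal sketch (not code from the paper) of group-relative advantage estimation where completions are grouped by the partially revealed MCTS prefix they were conditioned on, then normalized within each group as in standard GRPO. The function name, the `prefix_ids` grouping key, and the example data are illustrative assumptions.

```python
import numpy as np

def grpo_prefix_advantages(rewards, prefix_ids, eps=1e-8):
    """Group-relative advantages, grouping completions by shared MCTS prefix.

    rewards    : scalar reward per completion
    prefix_ids : id of the partially revealed MCTS prefix each completion
                 was conditioned on (hypothetical grouping key)
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    prefix_ids = np.asarray(prefix_ids)
    advantages = np.zeros_like(rewards)
    for pid in np.unique(prefix_ids):
        mask = prefix_ids == pid
        group = rewards[mask]
        # Standard GRPO normalization within the group; a near-constant group
        # (the saturation/collapse issue the abstract mentions) yields
        # near-zero advantages and thus a vanishing learning signal.
        advantages[mask] = (group - group.mean()) / (group.std() + eps)
    return advantages

# Example: two prefixes, three completions each
rewards    = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0]
prefix_ids = [0,   0,   0,   1,   1,   1]
print(grpo_prefix_advantages(rewards, prefix_ids))
```

In this sketch the prefix id plays the role of the "group" in GRPO; how the paper actually weights or conditions rewards across tree levels is more involved than this per-group normalization.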

Country of Origin
🇩🇪 🇨🇳 Germany, China

Page Count
22 pages

Category
Computer Science:
Artificial Intelligence