Empowering Multi-Turn Tool-Integrated Reasoning with Group Turn Policy Optimization
By: Yifeng Ding, Hung Le, Songyang Han, and more
Potential Business Impact:
Trains AI models to solve math problems step by step by writing and running code.
Training Large Language Models (LLMs) for multi-turn Tool-Integrated Reasoning (TIR) - where models iteratively reason, generate code, and verify through execution - remains challenging for existing reinforcement learning (RL) approaches. Current RL methods, exemplified by Group Relative Policy Optimization (GRPO), suffer from coarse-grained, trajectory-level rewards that provide insufficient learning signals for complex multi-turn interactions, leading to training stagnation. To address this issue, we propose Group Turn Policy Optimization (GTPO), a novel RL algorithm specifically designed for training LLMs on multi-turn TIR tasks. GTPO introduces three key innovations: (1) turn-level reward assignment, which provides fine-grained feedback for individual turns; (2) return-based advantage estimation, in which normalized discounted returns serve as advantages; and (3) self-supervised reward shaping, which exploits self-supervision signals from generated code to densify sparse binary outcome-based rewards. Our comprehensive evaluation demonstrates that GTPO outperforms GRPO by 3.0% on average across diverse reasoning benchmarks, establishing its effectiveness for advancing complex mathematical reasoning in the real world.
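To make innovations (2) and (3) concrete, here is a minimal sketch of turn-level, return-based advantage estimation over a group of rollouts. It assumes per-turn rewards have already been assigned (e.g., a sparse outcome reward densified with a small bonus for turns whose code executed cleanly); the function name, discount factor, and reward values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def turn_level_advantages(group_turn_rewards, gamma=0.95):
    """Illustrative sketch: turn-level discounted returns, normalized
    across a group of rollouts of the same prompt (GRPO-style group
    normalization, applied per turn rather than per trajectory).

    group_turn_rewards: list of per-trajectory lists of turn rewards,
    e.g. [[r_1, ..., r_T], ...].
    Returns a matching nested list of advantages.
    """
    # 1) Discounted return for each turn: G_t = sum_{k>=t} gamma^(k-t) * r_k
    returns = []
    for rewards in group_turn_rewards:
        acc, g = 0.0, []
        for r in reversed(rewards):
            acc = r + gamma * acc
            g.append(acc)
        returns.append(list(reversed(g)))

    # 2) Normalize returns across the whole group to obtain advantages
    flat = np.concatenate([np.asarray(g) for g in returns])
    mean, std = flat.mean(), flat.std() + 1e-8
    return [[(g - mean) / std for g in traj] for traj in returns]

# Example: two rollouts of the same prompt; the final 1.0 is a binary
# outcome reward, the 0.1 entries are hypothetical shaping bonuses.
advantages = turn_level_advantages([[0.1, 0.1, 1.0], [0.1, 0.0, 0.0]])
print(advantages)
```

In this sketch, every turn receives its own advantage derived from the return that follows it, rather than a single trajectory-level score, which is the fine-grained signal the abstract attributes to GTPO.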
Similar Papers
Training Task Reasoning LLM Agents for Multi-turn Task Planning via Single-turn Reinforcement Learning
Machine Learning (CS)
Teaches AI to plan long tasks better, faster.
Information Gain-based Policy Optimization: A Simple and Effective Approach for Multi-Turn LLM Agents
Computation and Language
Teaches AI to learn better from each step.
Stronger Together: On-Policy Reinforcement Learning for Collaborative LLMs
Machine Learning (CS)
Teaches AI to work together better for harder tasks.