R$^2$PO: Decoupling Training Trajectories from Inference Responses for LLM Reasoning
By: Jingchu Wang, Bingbing Xu, Yige Yuan, and more
Potential Business Impact:
Makes AI reason better by letting it explore more during training without changing how it answers.
Reinforcement learning has become a central paradigm for improving LLM reasoning. However, existing methods use a single policy to produce both inference responses and training optimization trajectories. The conflict between these objectives, generating stable inference responses versus diverse training trajectories, leads to insufficient exploration, which harms reasoning capability. To address this problem, we propose R$^2$PO (Residual Rollout Policy Optimization), which introduces a lightweight Residual Rollout-Head atop the policy to decouple training trajectories from inference responses, enabling controlled trajectory diversification during training while keeping inference generation stable. Experiments across multiple benchmarks show that our method consistently outperforms baselines, achieving average accuracy gains of 3.1% on MATH-500 and 2.4% on APPS, while also reducing formatting errors and mitigating length bias for stable optimization. Our code is publicly available at https://github.com/RRPO-ARR/Code.
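To make the core idea concrete, below is a minimal sketch (not the authors' implementation, whose details are not given in the abstract) of how a lightweight residual head could sit atop a policy so that training-time rollouts are diversified while inference logits stay untouched. All names here (`ResidualRolloutHead`, `DecoupledPolicy`, `use_residual`, `rank`) are hypothetical placeholders.

```python
# Hypothetical sketch of decoupling training rollouts from inference responses
# via a small residual head on top of the policy's logits.
import torch
import torch.nn as nn


class ResidualRolloutHead(nn.Module):
    """Low-rank residual term added to the policy logits during training rollouts."""

    def __init__(self, hidden_size: int, vocab_size: int, rank: int = 16):
        super().__init__()
        # Low-rank bottleneck keeps the extra head lightweight.
        self.down = nn.Linear(hidden_size, rank, bias=False)
        self.up = nn.Linear(rank, vocab_size, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op residual

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(hidden_states))


class DecoupledPolicy(nn.Module):
    """Wraps a backbone + LM head so rollouts and inference use different logits."""

    def __init__(self, backbone: nn.Module, lm_head: nn.Linear,
                 hidden_size: int, vocab_size: int):
        super().__init__()
        self.backbone = backbone            # maps token ids to hidden states
        self.lm_head = lm_head              # inference head, kept stable
        self.rollout_head = ResidualRolloutHead(hidden_size, vocab_size)

    def forward(self, input_ids: torch.Tensor, use_residual: bool) -> torch.Tensor:
        h = self.backbone(input_ids)        # (batch, seq, hidden)
        logits = self.lm_head(h)            # stable inference logits
        if use_residual:
            # Training-time rollouts add the residual to diversify sampled
            # trajectories; inference-time generation skips this branch.
            logits = logits + self.rollout_head(h)
        return logits
```

In this sketch, RL rollouts would sample with `use_residual=True` to get more diverse trajectories for optimization, while deployed generation would call the model with `use_residual=False`, leaving inference behavior unchanged.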
Similar Papers
RPO: Reinforcement Fine-Tuning with Partial Reasoning Optimization
Artificial Intelligence
Makes AI learn much faster and cheaper.
OptPO: Optimal Rollout Allocation for Test-time Policy Optimization
Machine Learning (CS)
Makes AI smarter by learning from its own mistakes.
Ratio-Variance Regularized Policy Optimization for Efficient LLM Fine-tuning
Machine Learning (CS)
Helps AI learn better and faster from mistakes.