Score: 2

PVPO: Pre-Estimated Value-Based Policy Optimization for Agentic Reasoning

Published: August 28, 2025 | arXiv ID: 2508.21104v3

By: Wenfeng Feng, Penghong Zhao, Guochao Jiang, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Trains AI agents to solve complex reasoning tasks with fewer training rollouts, cutting compute cost while improving accuracy.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Critic-free reinforcement learning methods, particularly group policy methods, have attracted considerable attention for their efficiency in complex tasks. However, these methods rely heavily on multiple sampling and comparisons within the policy to estimate advantage, which may cause the policy to fall into a local optimum and increases computational cost. To address these issues, we propose PVPO, an efficient reinforcement learning method enhanced by an advantage reference anchor and data pre-sampling. Specifically, we use the reference model to perform rollouts in advance and employ the calculated reward score as a reference anchor. Our approach effectively corrects the cumulative bias introduced by intra-group comparisons and significantly reduces reliance on a large number of rollouts during training. Meanwhile, the reference model can assess sample difficulty during data pre-sampling, enabling effective selection of high-gain data to improve training efficiency. Moreover, PVPO is orthogonal to other advanced critic-free RL algorithms, making it compatible with and complementary to these methods. Experiments conducted on nine datasets across two domains demonstrate that PVPO achieves state-of-the-art (SOTA) performance. Our approach not only demonstrates robust generalization across multiple tasks, but also exhibits scalable performance across models of varying scales.
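
The abstract describes two mechanisms: pre-sampling data with a reference model to gauge sample difficulty, and using the reference model's reward as an anchor for advantage estimation instead of relying only on intra-group comparisons. The snippet below is a minimal, hypothetical Python sketch of how these ideas could fit together; the function names, the rollout count `k_ref`, the selection thresholds, and the exact advantage formula are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pre_sample_anchors(prompts, reference_rollout_fn, reward_fn,
                       k_ref=4, min_anchor=0.05, max_anchor=0.95):
    """Pre-sampling pass (sketch): roll out a frozen reference model a few
    times per prompt, store the mean reward as that prompt's advantage anchor,
    and keep only prompts whose anchor suggests room for learning gain
    (neither already solved nor apparently hopeless)."""
    selected, anchors = [], {}
    for prompt in prompts:
        rewards = [reward_fn(prompt, reference_rollout_fn(prompt))
                   for _ in range(k_ref)]
        anchor = float(np.mean(rewards))
        anchors[prompt] = anchor
        if min_anchor <= anchor <= max_anchor:  # high-gain prompts only
            selected.append(prompt)
    return selected, anchors

def anchored_advantages(policy_rewards, anchor):
    """Advantage relative to the pre-estimated reference anchor, rather than
    the intra-group mean used by critic-free group methods such as GRPO."""
    rewards = np.asarray(policy_rewards, dtype=np.float32)
    return rewards - anchor
```

Because the anchor is computed once before training, each prompt needs fewer policy rollouts per update to get a stable baseline, which is consistent with the abstract's claim of reduced reliance on the number of rollouts.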

Country of Origin
🇨🇳 China

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)