DVPO: Distributional Value Modeling-based Policy Optimization for LLM Post-Training
By: Dingwei Zhu, Zhiheng Xi, Shihan Dou, and more
Potential Business Impact:
Teaches AI to learn better from messy information.
Reinforcement learning (RL) has shown strong performance in LLM post-training, but real-world deployment often involves noisy or incomplete supervision. In such settings, complex and unreliable supervision signals can destabilize training and harm generalization. While existing approaches such as worst-case optimization (e.g., RFQI, CQL) and mean-based methods (e.g., PPO, GRPO) can improve stability, they often overlook generalization and may produce overly conservative policies, leading to uneven performance across diverse real-world scenarios. To this end, we introduce DVPO (Distributional Value Modeling with Risk-aware Policy Optimization), a new RL framework that combines conditional risk theory with distributional value modeling to better balance robustness and generalization. DVPO learns token-level value distributions to provide fine-grained supervision, and applies an asymmetric risk regularization to shape the distribution tails: it contracts the lower tail to dampen noisy negative deviations, while expanding the upper tail to preserve exploratory diversity. In extensive experiments and analyses on multi-turn dialogue, math reasoning, and scientific QA, DVPO consistently outperforms PPO, GRPO, and robust Bellman-based PPO under noisy supervision, demonstrating its potential for real-world LLM post-training.
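To make the tail-shaping idea concrete, here is a minimal, hypothetical PyTorch sketch. It assumes the token-level value distribution is represented by K predicted quantiles from a distributional value head; the class and function names, tail fractions, and the exact penalty form are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of asymmetric tail regularization over a token-level
# quantile value distribution. All names and coefficients are assumptions
# for illustration, not the paper's released code.
import torch
import torch.nn as nn

class DistributionalValueHead(nn.Module):
    """Predicts K quantiles of the return distribution for every token."""
    def __init__(self, hidden_size: int, num_quantiles: int = 16):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_quantiles)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        # returns quantile estimates: (batch, seq_len, num_quantiles)
        return self.proj(hidden_states)

def asymmetric_tail_regularizer(quantiles: torch.Tensor,
                                lower_frac: float = 0.25,
                                upper_frac: float = 0.25,
                                contract_coef: float = 1.0,
                                expand_coef: float = 0.1) -> torch.Tensor:
    """Contract the lower tail and expand the upper tail of the predicted
    token-level value distribution.

    quantiles: (batch, seq_len, K), assumed sorted along the last dim.
    """
    k = quantiles.size(-1)
    median = quantiles[..., k // 2].unsqueeze(-1)   # (batch, seq_len, 1)
    n_low = max(1, int(k * lower_frac))
    n_up = max(1, int(k * upper_frac))
    lower_tail = quantiles[..., :n_low]             # lowest quantiles
    upper_tail = quantiles[..., -n_up:]             # highest quantiles

    # Spread of each tail measured relative to the median value.
    lower_spread = (median - lower_tail).clamp(min=0).mean()
    upper_spread = (upper_tail - median).clamp(min=0).mean()

    # Penalize lower-tail spread (dampen noisy negative deviations) and
    # reward upper-tail spread (preserve exploratory upside).
    return contract_coef * lower_spread - expand_coef * upper_spread
```

In a full training loop, such a regularizer would presumably be added with a risk-weighting coefficient to the usual value loss before the policy update, e.g. loss = value_loss + lambda_risk * asymmetric_tail_regularizer(quantiles).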
Similar Papers
Lean and Mean: Decoupled Value Policy Optimization with Global Value Guidance
Machine Learning (CS)
Makes AI learn better, faster, and cheaper.
VRPO: Rethinking Value Modeling for Robust RL Training under Noisy Supervision
Machine Learning (CS)
Teaches AI to learn better from mistakes.
Ratio-Variance Regularized Policy Optimization for Efficient LLM Fine-tuning
Machine Learning (CS)
Helps AI learn better and faster from mistakes.