Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation
By: Songjun Tu, Jiahao Lin, Xiangyu Tian, and more
Potential Business Impact:
Makes AI reasoning smarter with less computing power.
Recent advancements in post-training methodologies for large language models (LLMs) have highlighted reinforcement learning (RL) as a critical component for enhancing reasoning. However, the substantial computational costs associated with RL-based approaches have led to growing interest in alternative paradigms, such as Direct Preference Optimization (DPO). In this study, we investigate the effectiveness of DPO in facilitating self-improvement for LLMs through iterative preference-based learning. We demonstrate that a single round of DPO with coarse filtering significantly enhances mathematical reasoning performance, particularly for strong base models. Furthermore, we design an iterative enhancement framework for both the generator and the reward model (RM), enabling their mutual improvement through online interaction across multiple rounds of DPO. Finally, with simple verifiable rewards, our model DPO-VP achieves RL-level performance with significantly lower computational overhead. These findings highlight DPO as a scalable and cost-effective alternative to RL, offering a practical solution for enhancing LLM reasoning in resource-constrained settings.
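To make the recipe in the abstract concrete, the sketch below illustrates the two ingredients it names: the standard DPO objective and coarse filtering of self-generated solutions with a simple verifiable reward (e.g., checking the final answer). This is a minimal illustration under assumed interfaces; the function names (dpo_loss, build_preference_pairs, sample_fn, verify_fn) and the beta value are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of one round of DPO with verifiable-reward filtering.
# Not the authors' implementation; interfaces are assumed for illustration.
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over per-sequence log-probabilities:
    -log sigmoid(beta * (policy margin - reference margin))."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()


def build_preference_pairs(prompts, sample_fn, verify_fn, n_samples=8):
    """Coarse filtering with a verifiable reward: sample several solutions
    per prompt, take one verified-correct solution as 'chosen' and one
    incorrect solution as 'rejected'; skip prompts lacking either."""
    pairs = []
    for prompt in prompts:
        samples = [sample_fn(prompt) for _ in range(n_samples)]
        verdicts = [(s, verify_fn(prompt, s)) for s in samples]
        correct = [s for s, ok in verdicts if ok]
        wrong = [s for s, ok in verdicts if not ok]
        if correct and wrong:
            pairs.append((prompt, correct[0], wrong[0]))
    return pairs
```

In an iterative setup like the one the abstract describes, these pairs would be regenerated with the updated generator at each round, so that the policy and the preference signal (an RM or a verifiable checker) can improve together across rounds.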
Similar Papers
Exploring the Potential of Offline RL for Reasoning in LLMs: A Preliminary Study
Computation and Language
Makes AI smarter and cheaper to train.
A Survey of Direct Preference Optimization
Machine Learning (CS)
Teaches computers to be helpful and safe.
MDPO: Multi-Granularity Direct Preference Optimization for Mathematical Reasoning
Machine Learning (CS)
Makes computers better at solving math problems.