SGDPO: Self-Guided Direct Preference Optimization for Language Model Alignment
By: Wenqiao Zhu, Ji Liu, Lulu Wang, and more
Potential Business Impact:
Makes AI understand what you like better.
Direct Preference Optimization (DPO) is broadly used for aligning Large Language Models (LLMs) with human values because of its flexibility. Despite its effectiveness, it has been observed that DPO's ability to generate human-preferred responses is limited and that its results are far from robust. To address these limitations, in this paper we propose a novel Self-Guided Direct Preference Optimization algorithm, SGDPO, which incorporates a pilot term to steer the gradient flow during optimization, allowing fine-grained control over how the chosen and rejected rewards are updated. We provide a detailed theoretical analysis of the proposed method and elucidate its operational mechanism. Furthermore, we conduct comprehensive experiments on various models and benchmarks. The extensive experimental results are consistent with our theoretical analysis and confirm the effectiveness of the proposed approach (up to a 9.19% higher score).
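To make the idea of steering gradient flow concrete, here is a minimal sketch in PyTorch of a standard DPO-style loss with a hypothetical scalar `pilot` weighting the rejected-reward margin. The abstract does not specify SGDPO's actual pilot term, so the `pilot` parameter, the function name, and the toy inputs below are assumptions for illustration only, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def dpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps,
                   beta=0.1, pilot=1.0):
    """Standard DPO loss with a hypothetical `pilot` scalar on the rejected
    margin. This is a placeholder showing where a pilot-style term could
    modulate gradient flow, not the SGDPO formulation from the paper."""
    # Implicit rewards are the log-probability ratios against the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Scaling the rejected term changes how strongly gradients push the
    # rejected reward down relative to pushing the chosen reward up.
    logits = chosen_rewards - pilot * rejected_rewards
    return -F.logsigmoid(logits).mean()

if __name__ == "__main__":
    # Toy usage with random per-sequence log-probabilities.
    torch.manual_seed(0)
    pol_c, pol_r = torch.randn(4), torch.randn(4)
    ref_c, ref_r = torch.randn(4), torch.randn(4)
    print(dpo_style_loss(pol_c, pol_r, ref_c, ref_r, beta=0.1, pilot=1.0))
```

With `pilot=1.0` this reduces to vanilla DPO; other values merely illustrate, under these assumptions, how an extra term could rebalance the updates to chosen versus rejected rewards.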
Similar Papers
A Survey of Direct Preference Optimization
Machine Learning (CS)
Teaches computers to be helpful and safe.
BPO: Revisiting Preference Modeling in Direct Preference Optimization
Computation and Language
Makes AI better at math and following instructions.
Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
Computation and Language
Makes AI learn better from what people like.