Rethinking DPO: The Role of Rejected Responses in Preference Misalignment
By: Jay Hyeon Cho, JunHyeok Oh, Myunsoo Kim, and more
Potential Business Impact:
Helps AI models learn to prefer good answers without over-penalizing bad ones.
Direct Preference Optimization (DPO) is a simple and efficient framework that has attracted substantial attention. However, it often struggles to meet its primary objectives -- increasing the generation probability of chosen responses while reducing that of rejected responses -- due to the dominant influence of rejected responses on the loss function. This imbalance leads to suboptimal performance in promoting preferred responses. In this work, we systematically analyze the limitations of DPO and existing algorithms designed to achieve the objectives stated above. To address these limitations, we propose Bounded-DPO (BDPO), a novel method that bounds the influence of rejected responses while maintaining the original optimization structure of DPO. Through theoretical analysis and empirical evaluations, we demonstrate that BDPO achieves a balanced optimization of the chosen and rejected responses, outperforming existing algorithms.
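To make the abstract's description concrete, below is a minimal PyTorch-style sketch contrasting the standard DPO loss with one possible way to bound the influence of the rejected response: clamping its log-ratio from below. The function names, the `beta` value, and the `rejected_floor` threshold are illustrative assumptions, not the paper's exact BDPO formulation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio))."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

def bounded_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                     ref_chosen_logps, ref_rejected_logps,
                     beta=0.1, rejected_floor=-5.0):
    """Illustrative bounded variant (assumption, not the paper's BDPO):
    clamp the rejected log-ratio so that pushing the rejected response
    ever further below the reference stops reducing the loss."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = torch.clamp(
        policy_rejected_logps - ref_rejected_logps, min=rejected_floor)
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In the bounded sketch, once the rejected log-ratio reaches the floor its gradient vanishes, so further loss reduction must come from raising the chosen log-ratio, which reflects the balanced optimization of chosen and rejected responses that the abstract describes.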
Similar Papers
BPO: Revisiting Preference Modeling in Direct Preference Optimization
Computation and Language
Makes AI better at math and following instructions.
What Matters in Data for DPO?
Machine Learning (CS)
Makes AI better by focusing on good answers.
Inducing Robustness in a 2 Dimensional Direct Preference Optimization Paradigm
Artificial Intelligence
Makes AI understand what people like better.