Rethinking DPO: The Role of Rejected Responses in Preference Misalignment

Published: June 15, 2025 | arXiv ID: 2506.12725v1

By: Jay Hyeon Cho, JunHyeok Oh, Myunsoo Kim and more

Potential Business Impact:

Improves how AI models are fine-tuned from human preference data, so they more reliably produce the preferred answer.

Business Areas:
A/B Testing, Data and Analytics

Direct Preference Optimization (DPO) is a simple and efficient framework that has attracted substantial attention. However, it often struggles to meet its primary objectives -- increasing the generation probability of chosen responses while reducing that of rejected responses -- due to the dominant influence of rejected responses on the loss function. This imbalance leads to suboptimal performance in promoting preferred responses. In this work, we systematically analyze the limitations of DPO and existing algorithms designed to achieve the objectives stated above. To address these limitations, we propose Bounded-DPO (BDPO), a novel method that bounds the influence of rejected responses while maintaining the original optimization structure of DPO. Through theoretical analysis and empirical evaluations, we demonstrate that BDPO achieves a balanced optimization of the chosen and rejected responses, outperforming existing algorithms.
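For reference (this formula is the standard DPO objective from prior work, not text quoted from this paper), the loss being critiqued is typically written as

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
\]

where \(y_w\) and \(y_l\) are the chosen and rejected responses and \(\pi_{\mathrm{ref}}\) is the reference policy. The abstract's point is that this loss can be reduced largely by shrinking \(\pi_\theta(y_l \mid x)\) rather than raising \(\pi_\theta(y_w \mid x)\); BDPO is described as bounding the influence of the rejected-response term while keeping this overall structure, though the abstract does not specify the exact form of the bound.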

Page Count
16 pages

Category
Computer Science: Artificial Intelligence