Lightweight Robust Direct Preference Optimization
By: Cheol Woo Kim, Shresth Verma, Mauricio Tec, and more
Potential Business Impact:
Makes AI learn better from messy human feedback.
Direct Preference Optimization (DPO) has become a popular method for fine-tuning large language models (LLMs) due to its stability and simplicity. However, it is also known to be sensitive to noise in the data and prone to overfitting. Recent work has proposed using distributionally robust optimization (DRO) to address potential noise and distributional shift in the data, but these methods often suffer from excessive conservatism and high computational cost. We propose DPO-PRO (DPO with Preference Robustness), a robust fine-tuning algorithm based on DPO that accounts for uncertainty in the preference distribution through a lightweight DRO formulation. Unlike prior DRO-based variants, DPO-PRO focuses solely on uncertainty in preferences, avoiding unnecessary conservatism while incurring negligible computational overhead. We further show that DPO-PRO is equivalent to a regularized DPO objective that penalizes model overconfidence under weak preference signals. We evaluate DPO-PRO on standard alignment benchmarks and a real-world public health task. Experimental results show that our method consistently improves robustness to noisy preference signals compared to existing DPO variants.
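To make the idea of "a regularized DPO objective that penalizes overconfidence" concrete, here is a minimal PyTorch-style sketch: the standard DPO loss plus an illustrative confidence penalty. The penalty form, the `lambda_reg` weight, and the function name are assumptions for illustration only; the paper defines the actual DPO-PRO objective via its DRO formulation.

```python
import torch
import torch.nn.functional as F

def dpo_with_confidence_penalty(policy_chosen_logps, policy_rejected_logps,
                                ref_chosen_logps, ref_rejected_logps,
                                beta=0.1, lambda_reg=0.1):
    """Standard DPO loss plus an illustrative overconfidence penalty.
    NOTE: the penalty term and lambda_reg are assumptions for illustration,
    not the DPO-PRO objective from the paper."""
    # Implicit reward margin from the standard DPO derivation.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = beta * (pi_logratios - ref_logratios)

    # Standard DPO: negative log-sigmoid of the margin.
    dpo_loss = -F.logsigmoid(logits)

    # Illustrative regularizer: penalize near-certain preference
    # probabilities (low Bernoulli entropy), one possible way to
    # discourage overconfidence under weak preference signals.
    p = torch.sigmoid(logits)
    entropy = -(p * torch.log(p + 1e-8) + (1 - p) * torch.log(1 - p + 1e-8))
    return (dpo_loss - lambda_reg * entropy).mean()
```

The inputs are the summed token log-probabilities of the chosen and rejected responses under the trained policy and the frozen reference model, as in standard DPO implementations.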
Similar Papers
Preference Robustness for DPO with Applications to Public Health
Machine Learning (CS)
Helps AI make better health plans from simple words.
A Survey of Direct Preference Optimization
Machine Learning (CS)
Teaches computers to be helpful and safe.
Margin Adaptive DPO: Leveraging Reward Model for Granular Control in Preference Optimization
Machine Learning (CS)
Teaches AI to write better by learning from mistakes.