Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap
By: Xuan Qi, Rongwu Xu, Zhijing Jin
Potential Business Impact:
Chooses smart examples to teach AI better.
Aligning large language models (LLMs) with human preferences is a critical challenge in AI research. While methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) are widely used, they often rely on large, costly preference datasets, and principled methods for selecting high-quality preference data remain scarce. In this work, we introduce a difficulty-based data selection strategy for preference datasets, grounded in the DPO implicit reward mechanism. By selecting preference examples with smaller DPO implicit reward gaps, which indicate more challenging cases, we improve data efficiency and model alignment. Our approach consistently outperforms five strong baselines across multiple datasets and alignment tasks, achieving superior performance with only 10% of the original data. This principled, efficient selection method offers a promising route to scaling LLM alignment with limited resources.
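The selection criterion follows from the DPO implicit reward, r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)): pairs whose chosen and rejected responses have a small reward gap are treated as harder and kept. The following is a minimal sketch of that ranking step, not the authors' released code; it assumes per-sequence log-probabilities have already been computed, and the data class, helper names, beta value, and 10% keep ratio are illustrative.

```python
# Sketch: rank preference pairs by the DPO implicit reward gap and keep the
# smallest-gap (hardest) fraction. All names and defaults are illustrative.
from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str
    # Sequence log-probs log pi(y|x) under the policy and the frozen reference model.
    policy_logp_chosen: float
    policy_logp_rejected: float
    ref_logp_chosen: float
    ref_logp_rejected: float


def implicit_reward_gap(pair: PreferencePair, beta: float = 0.1) -> float:
    """DPO implicit reward: r(x, y) = beta * (log pi(y|x) - log pi_ref(y|x)).
    The gap r(x, y_chosen) - r(x, y_rejected) is small for harder pairs."""
    r_chosen = beta * (pair.policy_logp_chosen - pair.ref_logp_chosen)
    r_rejected = beta * (pair.policy_logp_rejected - pair.ref_logp_rejected)
    return r_chosen - r_rejected


def select_hard_pairs(pairs: list[PreferencePair], keep_ratio: float = 0.10,
                      beta: float = 0.1) -> list[PreferencePair]:
    """Keep the keep_ratio fraction of pairs with the smallest reward gap."""
    ranked = sorted(pairs, key=lambda p: implicit_reward_gap(p, beta))
    k = max(1, int(len(ranked) * keep_ratio))
    return ranked[:k]
```

In this reading, the retained subset is then used for DPO training in place of the full preference dataset.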
Similar Papers
Beyond Single: A Data Selection Principle for LLM Alignment via Fine-Grained Preference Signals
Machine Learning (CS)
Teaches AI to follow many different rules better.
Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
Artificial Intelligence
Teaches AI to understand many different opinions.
When Data is the Algorithm: A Systematic Study and Curation of Preference Optimization Datasets
Computation and Language
Makes AI understand what you like better.