Score: 4

Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap

Published: August 6, 2025 | arXiv ID: 2508.04149v1

By: Xuan Qi, Rongwu Xu, Zhijing Jin

BigTech Affiliations: University of Washington

Potential Business Impact:

Picks the hardest training examples so AI models can be aligned using far less data.

Aligning large language models (LLMs) with human preferences is a critical challenge in AI research. While methods like Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) are widely used, they often rely on large, costly preference datasets, and methods for high-quality data selection tailored specifically to preference data remain scarce. In this work, we introduce a novel difficulty-based data selection strategy for preference datasets, grounded in the DPO implicit reward mechanism. By selecting preference data examples with smaller DPO implicit reward gaps, which indicate more challenging cases, we improve data efficiency and model alignment. Our approach consistently outperforms five strong baselines across multiple datasets and alignment tasks, achieving superior performance with only 10% of the original data. This principled, efficient selection method offers a promising solution for scaling LLM alignment with limited resources.
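
The selection criterion can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' released code), assuming precomputed per-response log-probabilities under the trained policy and the frozen reference model. It computes each pair's DPO implicit reward gap, beta * (log pi_theta(y_w|x) - log pi_ref(y_w|x)) - beta * (log pi_theta(y_l|x) - log pi_ref(y_l|x)), and keeps the fraction of pairs with the smallest gaps, i.e. the most challenging examples.

```python
# Minimal sketch of difficulty-based preference data selection via the
# DPO implicit reward gap. All helper names and log-probability inputs
# are assumptions for illustration, not the paper's released code.

import torch


def implicit_reward_gap(policy_logp_w, ref_logp_w,
                        policy_logp_l, ref_logp_l, beta=0.1):
    """DPO implicit reward gap for one (chosen, rejected) pair.

    The implicit reward is r(x, y) = beta * (log pi_theta(y|x) - log pi_ref(y|x));
    the gap is r(x, y_chosen) - r(x, y_rejected). Smaller gaps indicate
    harder examples under the selection criterion described above.
    """
    reward_chosen = beta * (policy_logp_w - ref_logp_w)
    reward_rejected = beta * (policy_logp_l - ref_logp_l)
    return reward_chosen - reward_rejected


def select_hard_pairs(gaps, keep_fraction=0.10):
    """Return indices of the pairs with the smallest reward gaps."""
    gaps = torch.as_tensor(gaps, dtype=torch.float32)
    k = max(1, int(keep_fraction * gaps.numel()))
    # Negate so topk picks the smallest gaps (hardest pairs).
    _, indices = torch.topk(-gaps, k)
    return indices.tolist()


# Example usage with dummy log-probabilities for three preference pairs.
gaps = [implicit_reward_gap(-12.0, -13.0, -15.0, -14.5),
        implicit_reward_gap(-10.0, -10.1, -10.3, -10.2),
        implicit_reward_gap(-9.0, -11.0, -20.0, -18.0)]
print(select_hard_pairs(gaps, keep_fraction=0.34))
```

The 10% retention figure reported in the abstract corresponds to setting keep_fraction accordingly on the full preference dataset before DPO training.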

Country of Origin
πŸ‡¨πŸ‡¦ πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡³ Canada, China, United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computation and Language