
When Human Preferences Flip: An Instance-Dependent Robust Loss for RLHF

Published: November 30, 2025 | arXiv ID: 2512.00709v1

By: Yifan Xu, Xichen Ye, Yifan Chen, and more

Potential Business Impact:

Makes LLM alignment more robust when human preference labels are accidentally flipped, reducing model errors caused by noisy feedback.

Business Areas:
A/B Testing, Data and Analytics

The quality of datasets plays an important role in large language model (LLM) alignment. When collecting human feedback, however, preference flipping is ubiquitous and corrupts data annotation; this issue calls for alignment algorithms with improved robustness against potentially flipped pairs. To this end, this paper introduces a Flipping-Aware Direct Preference Optimization (FA-DPO) algorithm tailored to preference flipping from a reinforcement learning from human feedback (RLHF) perspective. We dissect the inherent human intention model and the preference-flipping mechanism introduced by external factors as two distinct stages; in the latter, we introduce an instance-dependent flipping probability on the basis of the Bradley-Terry (BT) model. Further, by leveraging features relevant to preference annotation, we capture uncertainty in judgments and model preference-flipping patterns. In practice, we design a simple yet efficient iterative optimization algorithm compatible with the original RLHF and DPO algorithms. In our experiments, we evaluate the proposed method and baseline methods under multiple instance-dependent preference-flipping settings.
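As context for the abstract, here is a minimal, hypothetical sketch (not the authors' released code) of how an instance-dependent flip probability could be folded into a DPO-style Bradley-Terry loss: each pair's observed label is modeled as a mixture of the clean BT likelihood and its label-flipped counterpart, weighted by a per-pair flip probability predicted from annotation-related features. All names (fa_dpo_loss, FlipProbHead, flip_prob, beta) are illustrative assumptions.

```python
# Hypothetical illustration only: an instance-dependent "flipping-aware"
# variant of the DPO loss under the Bradley-Terry model. Names and the
# flip-probability head are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

def fa_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps,
                flip_prob, beta=0.1):
    # DPO's implicit reward margin between chosen and rejected responses.
    margin = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # Bradley-Terry probability of the observed label if the annotation
    # is clean, and if the true preference was flipped.
    p_clean = torch.sigmoid(margin)
    p_flipped = torch.sigmoid(-margin)
    # Mixture weighted by the per-pair (instance-dependent) flip probability.
    likelihood = (1.0 - flip_prob) * p_clean + flip_prob * p_flipped
    return -torch.log(likelihood.clamp_min(1e-8)).mean()

class FlipProbHead(nn.Module):
    """Maps annotation-related features (e.g., annotator agreement,
    response-length gap) to a per-pair flip probability in (0, 1)."""
    def __init__(self, feature_dim):
        super().__init__()
        self.linear = nn.Linear(feature_dim, 1)

    def forward(self, features):
        return torch.sigmoid(self.linear(features)).squeeze(-1)

# Toy usage with random tensors standing in for per-pair log-probabilities.
if __name__ == "__main__":
    n, d = 8, 4
    head = FlipProbHead(d)
    eps = head(torch.randn(n, d))
    loss = fa_dpo_loss(torch.randn(n), torch.randn(n),
                       torch.randn(n), torch.randn(n), eps)
    print(float(loss))
```

One reading of the abstract's "iterative optimization algorithm" is an alternation between fitting the flip-probability head with the policy fixed and minimizing the mixture loss above with the estimated flip probabilities, which would keep the procedure compatible with standard DPO training loops; this is an inference from the abstract, not a confirmed description of FA-DPO.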

Country of Origin
🇭🇰 Hong Kong

Page Count
16 pages

Category
Computer Science:
Artificial Intelligence