When Human Preferences Flip: An Instance-Dependent Robust Loss for RLHF
By: Yifan Xu, Xichen Ye, Yifan Chen, and more
Potential Business Impact:
Fixes AI mistakes from bad human feedback.
Dataset quality plays an important role in large language model (LLM) alignment. When collecting human feedback, however, preference flipping is ubiquitous and corrupts data annotations; this calls for alignment algorithms that are robust to potentially flipped pairs. To this end, this paper introduces a Flipping-Aware Direct Preference Optimization (FA-DPO) algorithm tailored to preference flipping from a reinforcement learning from human feedback (RLHF) perspective. We dissect the inherent human intention model and the preference flipping mechanism introduced by external factors as two distinct stages; in the latter, we introduce an instance-dependent flipping probability on the basis of the Bradley-Terry (BT) model. Further, by leveraging features relevant to preference annotation, we capture uncertainty in judgments and model preference flipping patterns. In practice, we design a simple yet efficient iterative optimization algorithm compatible with the original RLHF and DPO algorithms. In our experiments, we evaluate the proposed method against baseline methods under multiple instance-dependent preference flipping settings.
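The abstract describes an instance-dependent flipping probability layered on the Bradley-Terry model and an iterative scheme compatible with DPO. The snippet below is a minimal sketch of what such a loss could look like: it mixes the standard DPO likelihood with its flipped counterpart using a per-instance flip probability. The function name, the `flip_prob` input, and the mixture form are assumptions for illustration, not the paper's exact FA-DPO objective.

```python
# Hypothetical sketch of a flipping-aware preference loss, assuming a
# Bradley-Terry mixture in which each observed preference label is flipped
# with an instance-dependent probability. Illustrative only; not the
# paper's exact FA-DPO formulation.
import torch


def flipping_aware_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                            ref_chosen_logps, ref_rejected_logps,
                            flip_prob, beta=0.1):
    # Implicit reward margin between the chosen and rejected responses,
    # as in standard DPO.
    margin = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # Mixture likelihood of the observed label: the annotation reflects the
    # true preference with probability (1 - flip_prob) and the flipped
    # preference with probability flip_prob.
    likelihood = ((1.0 - flip_prob) * torch.sigmoid(margin)
                  + flip_prob * torch.sigmoid(-margin))
    # Negative log-likelihood, clamped for numerical stability.
    return -torch.log(likelihood.clamp_min(1e-8)).mean()
```

In an alternating scheme like the one the abstract mentions, `flip_prob` could be produced by a small predictor over annotation-relevant features and re-estimated between policy updates; with `flip_prob` set to zero, the sketch reduces to the standard DPO loss.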
Similar Papers
Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
Artificial Intelligence
Teaches AI to understand many different opinions.
Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap
Computation and Language
Chooses smart examples to teach AI better.
Provably Mitigating Corruption, Overoptimization, and Verbosity Simultaneously in Offline and Online RLHF/DPO Alignment
Machine Learning (CS)
Makes AI better by fixing its mistakes.