Reflective Preference Optimization (RPO): Enhancing On-Policy Alignment via Hint-Guided Reflection
By: Zihui Zhao, Zechang Li
Direct Preference Optimization (DPO) has emerged as a lightweight and effective alternative to Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with AI Feedback (RLAIF) for aligning large language and vision-language models. However, in the standard DPO formulation both the chosen and rejected responses are generated by the same policy, so the two often share similar errors and exhibit small Kullback-Leibler (KL) divergence; the resulting learning signal is weak, and convergence is slow and unstable. To address this limitation, we introduce Reflective Preference Optimization (RPO), a framework that incorporates hint-guided reflection into the DPO paradigm. RPO uses external models to identify the sources of hallucinations and to generate concise reflective hints, enabling the construction of on-policy preference pairs with stronger contrast and clearer preference signals. We show theoretically, via a mutual-information argument, that conditioning on hints increases the expected preference margin and improves sample efficiency while keeping responses within the policy distribution family. Empirically, RPO achieves superior alignment with fewer training samples and iterations, substantially reducing hallucination rates and delivering state-of-the-art performance across multimodal benchmarks.
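To make the contrast between standard on-policy DPO pairs and RPO's hint-conditioned pairs concrete, the sketch below implements the standard DPO objective (a well-known formula) together with a hypothetical hint-guided pair-construction step as the abstract describes it. The interfaces `policy.generate`, `critic.reflect`, and the prompt-level hint conditioning are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (PyTorch) of the standard DPO loss and of how RPO-style
# hint-guided reflection could plug into preference-pair construction.
# Function and method names below are illustrative assumptions.
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * (chosen margin - rejected margin))."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margin = chosen_rewards - rejected_rewards  # larger margin => stronger gradient signal
    return -F.logsigmoid(margin).mean()


def build_rpo_pair(policy, critic, prompt):
    """Hypothetical RPO pair construction, per the abstract's description:
    1. Sample an on-policy draft and let an external critic flag hallucination sources.
    2. Re-sample from the same policy conditioned on the critic's concise hint, so the
       chosen response stays on-policy while diverging more from the rejected draft.
    """
    rejected = policy.generate(prompt)                      # on-policy draft (rejected)
    hint = critic.reflect(prompt, rejected)                 # concise reflective hint
    chosen = policy.generate(prompt + "\n[Hint] " + hint)   # hint-conditioned resample (chosen)
    return prompt, chosen, rejected


# Usage of the loss with dummy per-sequence log-probabilities:
if __name__ == "__main__":
    pc, pr = torch.tensor([-12.0]), torch.tensor([-15.0])   # policy log p(y|x)
    rc, rr = torch.tensor([-13.0]), torch.tensor([-13.5])   # reference log p(y|x)
    print(dpo_loss(pc, pr, rc, rr).item())
```

Because the chosen response is re-sampled from the same policy with only the prompt augmented by the hint, the pair remains within the policy distribution family while the hint widens the expected margin that `dpo_loss` optimizes, which is the mechanism the abstract credits for the stronger preference signal.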
Similar Papers
Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
Artificial Intelligence
Teaches AI to understand many different opinions.
Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap
Computation and Language
Chooses smart examples to teach AI better.
Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
Computation and Language
Makes AI learn better from what people like.