Preference Distillation via Value-based Reinforcement Learning
By: Minchan Kwon, Junwon Ko, Kangil Kim, and more
Potential Business Impact:
Teaches small AI to learn better from examples.
Direct Preference Optimization (DPO) is a powerful paradigm for aligning language models with human preferences using pairwise comparisons. However, its binary win-or-loss supervision often proves insufficient for training small models with limited capacity. Prior works attempt to distill information from large teacher models using behavior cloning or KL divergence, but these methods tend to focus on mimicking the teacher's current behavior and overlook distilling its reward modeling. To address this issue, we propose Teacher Value-based Knowledge Distillation (TVKD), which introduces an auxiliary reward derived from the value function of the teacher model to provide soft guidance. This auxiliary reward is formulated to satisfy potential-based reward shaping, ensuring that the global reward structure and optimal policy of DPO are preserved. TVKD can be integrated into the standard DPO training framework and does not require additional rollouts. Our experimental results show that TVKD consistently improves performance across various benchmarks and model sizes.
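For context on the potential-based reward shaping constraint the abstract invokes, a shaped reward of the form below is known to leave the set of optimal policies unchanged. Taking the potential \(\Phi\) to be the teacher's value function is an assumption about how TVKD's auxiliary reward might be constructed, not the paper's exact formulation; the symbols \(\gamma\), \(\Phi\), and \(V_{\text{teacher}}\) are introduced here only for illustration:

% Sketch of potential-based reward shaping; choosing the teacher's value
% function as the potential is an assumption, not the paper's stated form.
\[
  \tilde{r}(s, a, s') \;=\; r(s, a) \;+\; \gamma\,\Phi(s') \;-\; \Phi(s),
  \qquad \Phi(s) := V_{\text{teacher}}(s).
\]

Because the shaping term telescopes along any trajectory, it shifts the return by an amount that depends only on the start and terminal states rather than on the actions taken, which is why such an auxiliary reward can give the student a denser, teacher-informed signal without altering the optimal policy of the underlying DPO objective.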
Similar Papers
Active Learning for Direct Preference Optimization
Machine Learning (CS)
Teaches AI to learn faster from human choices.
A Survey of Direct Preference Optimization
Machine Learning (CS)
Teaches computers to be helpful and safe.
daDPO: Distribution-Aware DPO for Distilling Conversational Abilities
Machine Learning (CS)
Makes small AI models talk as well as big ones.