daDPO: Distribution-Aware DPO for Distilling Conversational Abilities
By: Zhengze Zhang, Shiqi Wang, Yiqun Shen, and more
Potential Business Impact:
Makes small AI models talk as well as big ones.
Large language models (LLMs) have demonstrated exceptional performance across various applications, but their conversational abilities decline sharply as model size decreases, presenting a barrier to deployment in resource-constrained environments. Knowledge distillation with Direct Preference Optimization (dDPO) has emerged as a promising approach to enhancing the conversational abilities of smaller models using a larger teacher model. However, current methods focus primarily on 'black-box' KD, which uses only the teacher's responses and overlooks the output distribution the teacher can provide. This paper addresses that gap by introducing daDPO (Distribution-Aware DPO), a unified method for preference optimization and distribution-based distillation. We provide rigorous theoretical analysis and empirical validation, showing that daDPO outperforms existing methods both in restoring the performance of pruned models and in enhancing smaller LLMs. Notably, in in-domain evaluation, our method enables a 20%-pruned Vicuna1.5-7B to achieve near-teacher performance (a -7.3% preference rate versus dDPO's -31%), and allows Qwen2.5-1.5B to occasionally outperform its 7B teacher model (a 14.0% win rate).
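The abstract does not spell out the daDPO objective, but one plausible reading, offered here purely as an illustrative sketch, is a standard DPO preference loss augmented with a token-level KL term toward the teacher's output distribution (white-box distillation). The function names, the forward-KL choice, the toy tensor shapes, and the weighting coefficient `lam` below are assumptions for illustration, not the paper's actual formulation.

```python
# Minimal sketch (not the paper's exact objective): DPO preference loss plus a
# token-level KL penalty that pulls the student toward the teacher's distribution.
import torch
import torch.nn.functional as F

def dpo_term(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss on sequence log-probabilities of chosen/rejected responses."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

def teacher_kl_term(student_logits, teacher_logits, token_mask):
    """Token-level KL(teacher || student) over the vocabulary, averaged over real tokens."""
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    log_p_s = F.log_softmax(student_logits, dim=-1)
    kl = (log_p_t.exp() * (log_p_t - log_p_s)).sum(dim=-1)  # shape: [batch, seq_len]
    return (kl * token_mask).sum() / token_mask.sum()

def distribution_aware_dpo_loss(logp_c, logp_r, ref_logp_c, ref_logp_r,
                                student_logits, teacher_logits, token_mask,
                                beta=0.1, lam=0.5):
    """Preference optimization combined with white-box distillation from teacher logits."""
    return (dpo_term(logp_c, logp_r, ref_logp_c, ref_logp_r, beta)
            + lam * teacher_kl_term(student_logits, teacher_logits, token_mask))

# Toy usage with random tensors (2 preference pairs, 5 tokens, vocabulary of 8).
if __name__ == "__main__":
    b, t, v = 2, 5, 8
    loss = distribution_aware_dpo_loss(
        torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b),
        torch.randn(b, t, v), torch.randn(b, t, v), torch.ones(b, t),
    )
    print(loss.item())
```

In this reading, setting `lam` to zero recovers plain dDPO (black-box distillation on teacher responses only), while a positive `lam` is what makes the objective "distribution-aware."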
Similar Papers
Preference Distillation via Value based Reinforcement Learning
Computation and Language
Teaches small AI to learn better from examples.
BPO: Revisiting Preference Modeling in Direct Preference Optimization
Computation and Language
Makes AI better at math and following instructions.
A Survey of Direct Preference Optimization
Machine Learning (CS)
Teaches computers to be helpful and safe.