When Preferences Diverge: Aligning Diffusion Models with Minority-Aware Adaptive DPO
By: Lingfan Zhang, Chen Liu, Chengming Xu, and more
Potential Business Impact:
Helps computers make better pictures by learning from everyone.
In recent years, the field of image generation has seen significant advances, particularly in fine-tuning methods that align models with universal human preferences. This paper examines the critical role of preference data in training diffusion models, focusing on Diffusion-DPO and its subsequent adaptations. We investigate the complexities surrounding universal human preferences in image generation, highlighting the subjective nature of these preferences and the challenges posed by minority samples in preference datasets. Through pilot experiments, we demonstrate the existence of minority samples and their detrimental effects on model performance. We propose Adaptive-DPO, a novel approach that incorporates a minority-instance-aware metric into the DPO objective. This metric, combining intra-annotator confidence and inter-annotator stability, distinguishes majority samples from minority samples. The resulting Adaptive-DPO loss improves on the standard DPO loss in two ways: it enhances the model's learning of majority labels and it mitigates the negative impact of minority samples. Our experiments demonstrate that this method effectively handles both synthetic minority data and real-world preference data, paving the way for more effective training methodologies in image generation tasks.
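To make the idea concrete, here is a minimal sketch, not the paper's exact formulation, of how a minority-aware weight could enter a DPO-style objective: each preference pair gets a confidence score (high for presumed majority labels, low for presumed minority or noisy labels), and that score scales the pair's contribution to the loss. The function and argument names (adaptive_dpo_loss, margin, confidence, beta) are illustrative assumptions, not identifiers from the paper.

```python
# Hedged sketch: a per-pair weighted DPO-style loss. The weight stands in
# for the paper's minority-instance-aware metric (intra-annotator
# confidence and inter-annotator stability); the exact form used in
# Adaptive-DPO may differ.
import torch
import torch.nn.functional as F

def adaptive_dpo_loss(margin: torch.Tensor,
                      confidence: torch.Tensor,
                      beta: float = 0.1) -> torch.Tensor:
    """
    margin:     implicit reward margin per preference pair, e.g.
                (log pi_theta(x_w) - log pi_ref(x_w))
                - (log pi_theta(x_l) - log pi_ref(x_l)),
                or the corresponding denoising-error difference in a
                Diffusion-DPO setup.
    confidence: per-pair score in [0, 1]; close to 1 for presumed
                majority labels, close to 0 for presumed minority labels.
    beta:       DPO temperature.
    """
    # Standard DPO term: push the preferred sample's margin up.
    dpo_term = -F.logsigmoid(beta * margin)
    # Downweight low-confidence (likely minority) pairs so they do not
    # dominate the gradient, while majority pairs keep full weight.
    return (confidence * dpo_term).mean()
```

In this sketch the confidence score would be estimated from the model's own predictions and their stability during training, in the spirit of the intra-annotator confidence and inter-annotator stability signals described in the abstract; how those signals are actually computed and combined is specified in the paper itself.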
Similar Papers
Preference-Based Alignment of Discrete Diffusion Models
Machine Learning (CS)
Teaches AI to make better choices without rewards.
Towards Self-Improvement of Diffusion Models via Group Preference Optimization
CV and Pattern Recognition
Makes AI pictures better by learning from groups.
Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models
CV and Pattern Recognition
Makes AI art follow your words better.