Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning
By: Chubin Chen, Sujie Hu, Jiashu Zhu, and more
Recent studies have demonstrated significant progress in aligning text-to-image diffusion models with human preferences via Reinforcement Learning from Human Feedback. However, while existing methods achieve high scores on automated reward metrics, they often lead to Preference Mode Collapse (PMC), a specific form of reward hacking where models converge on narrow, high-scoring outputs (e.g., images with monolithic styles or pervasive overexposure), severely degrading generative diversity. In this work, we introduce and quantify this phenomenon, proposing DivGenBench, a novel benchmark designed to measure the extent of PMC. We posit that this collapse is driven by over-optimization along the reward model's inherent biases. Building on this analysis, we propose Directional Decoupling Alignment (D$^2$-Align), a novel framework that mitigates PMC by directionally correcting the reward signal. Specifically, our method first learns a directional correction within the reward model's embedding space while keeping the model frozen. This correction is then applied to the reward signal during the optimization process, preventing the model from collapsing into specific modes and thereby maintaining diversity. Our comprehensive evaluation, combining qualitative analysis with quantitative metrics for both quality and diversity, reveals that D$^2$-Align achieves superior alignment with human preferences.
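The abstract does not specify how the directional correction is parameterized, so the following is only a minimal sketch of the general idea it describes: a learned bias direction in a frozen reward model's embedding space whose contribution is subtracted from the reward signal used during RL fine-tuning. All names here (`DirectionalRewardCorrection`, `reward_model.encode`, `reward_model.score`) and the specific projection form are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

class DirectionalRewardCorrection(nn.Module):
    """Hypothetical sketch: learn a bias direction in a frozen reward model's
    embedding space and remove its contribution from the reward signal."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Learnable bias direction in the reward model's embedding space.
        self.direction = nn.Parameter(torch.randn(embed_dim))
        # Scalar mapping the directional component back to reward units.
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, embedding: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
        # Unit vector along the learned bias direction.
        d = self.direction / (self.direction.norm() + 1e-8)
        # Component of each sample's embedding along the bias direction.
        bias_component = embedding @ d          # shape: (batch,)
        # Corrected reward: subtract the part attributed to the reward bias.
        return reward - self.scale * bias_component


# Usage sketch (assumed interfaces; the reward model itself stays frozen):
# with torch.no_grad():
#     emb = reward_model.encode(images)       # (batch, embed_dim)
#     r = reward_model.score(images)          # (batch,)
# corrected_r = correction(emb, r)            # signal fed to the RL objective
```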