Reinforcing Diffusion Models by Direct Group Preference Optimization
By: Yihong Luo, Tianyang Hu, Jing Tang
Potential Business Impact:
Trains AI to make better pictures much faster.
While reinforcement learning methods such as Group Relative Policy Optimization (GRPO) have significantly enhanced Large Language Models, adapting them to diffusion models remains challenging. In particular, GRPO demands a stochastic policy, yet the most cost-effective diffusion samplers are based on deterministic ODEs. Recent work addresses this issue by using inefficient SDE-based samplers to induce stochasticity, but this reliance on model-agnostic Gaussian noise leads to slow convergence. To resolve this conflict, we propose Direct Group Preference Optimization (DGPO), a new online RL algorithm that dispenses with the policy-gradient framework entirely. DGPO learns directly from group-level preferences, which exploit the relative information among samples within a group. This design eliminates the need for inefficient stochastic policies, unlocking the use of efficient deterministic ODE samplers and faster training. Extensive experiments show that DGPO trains around 20 times faster than existing state-of-the-art methods and achieves superior performance on both in-domain and out-of-domain reward metrics. Code is available at https://github.com/Luo-Yihong/DGPO.
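The sketch below is a minimal, hypothetical illustration of the group-level idea described in the abstract: samples from a deterministic sampler are scored by a reward, rewards are converted to group-relative preference weights, and a differentiable per-sample objective is weighted by them, so no stochastic policy or log-probability is required. The function names, group size, standardization-plus-softmax weighting, and placeholder tensors are assumptions for illustration only; the actual DGPO objective, sampler, and rewards are defined in the paper and the linked repository.

```python
# Hypothetical sketch (not the official DGPO loss): weight a differentiable
# per-sample objective by group-relative reward preferences, so samples can
# come from a deterministic ODE sampler with no stochastic policy involved.
import torch
import torch.nn.functional as F


def group_relative_weights(rewards: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Map raw rewards of one group (shape [G]) to relative preference weights.

    Standardizing within the group uses only relative information among the
    group's samples, mirroring the group-level comparisons in the abstract.
    """
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return F.softmax(advantages / temperature, dim=0)


def group_preference_loss(per_sample_losses: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Weight each sample's differentiable loss (e.g. a denoising loss) by its
    group-relative preference, so higher-reward samples pull the model harder.
    """
    weights = group_relative_weights(rewards).detach()  # treat weights as constants
    return (weights * per_sample_losses).sum()


# Toy usage with placeholder tensors standing in for a group of generated images.
if __name__ == "__main__":
    G = 8                                                    # group size (assumed)
    per_sample_losses = torch.rand(G, requires_grad=True)    # stand-in for denoising losses
    rewards = torch.randn(G)                                 # stand-in for reward-model scores
    loss = group_preference_loss(per_sample_losses, rewards)
    loss.backward()
    print(float(loss))
```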
Similar Papers
Improving Reasoning for Diffusion Language Models via Group Diffusion Policy Optimization
Machine Learning (CS)
Teaches AI to solve math and code problems better.
Towards Self-Improvement of Diffusion Models via Group Preference Optimization
Computer Vision and Pattern Recognition
Makes AI pictures better by learning from groups.
Neighbor GRPO: Contrastive ODE Policy Optimization Aligns Flow Models
Computer Vision and Pattern Recognition
Makes AI art look more like what people want.