Distribution Matching Distillation Meets Reinforcement Learning
By: Dengyang Jiang, Dongyang Liu, Zanyi Wang, and more
Potential Business Impact:
Makes AI image generation faster while improving quality.
Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model into a few-step one to improve inference efficiency. However, the performance of the few-step student is often capped by that of the multi-step teacher. To circumvent this dilemma, we propose DMDR, a novel framework that incorporates Reinforcement Learning (RL) techniques into the distillation process. We show that for RL of the few-step generator, the DMD loss itself is a more effective regularizer than traditional ones. In turn, RL can guide the mode-coverage process in DMD more effectively. Together, these allow us to unlock the capacity of the few-step generator by conducting distillation and RL simultaneously. In addition, we design dynamic distribution guidance and dynamic renoise sampling training strategies to improve the initial distillation process. Experiments demonstrate that DMDR achieves leading visual quality and prompt coherence among few-step methods, and even exhibits performance that exceeds the multi-step teacher.
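To make the idea of using the DMD loss as the RL regularizer concrete, here is a minimal, hypothetical PyTorch sketch of one joint training step. Everything here is an illustrative assumption, not the authors' code: `TinyGenerator`, `dmd_loss`, `reward`, and the weight `lambda_dmd` are placeholders, the real DMD loss involves teacher and fake score networks, and the paper's RL formulation is not reproduced.

```python
# Hypothetical sketch of joint distillation + RL (not the authors' code).
# All names and the weighting below are illustrative assumptions.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a few-step diffusion generator."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, z):
        return self.net(z)

def dmd_loss(samples):
    # Placeholder for the distribution-matching (DMD) loss; the real one
    # compares score estimates from teacher and fake-score networks.
    return samples.pow(2).mean()

def reward(samples):
    # Placeholder reward (in practice e.g. a preference/aesthetic model).
    return -(samples - 1.0).pow(2).mean(dim=-1)

generator = TinyGenerator()
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
lambda_dmd = 1.0  # assumed weighting; the paper's schedule may differ

for step in range(10):
    z = torch.randn(8, 16)
    x = generator(z)
    # RL term: maximize reward (a simple differentiable surrogate here,
    # rather than a policy-gradient estimator, purely for illustration).
    rl_term = -reward(x).mean()
    # DMD term acts as the regularizer keeping the generator near the
    # teacher's distribution, in place of e.g. a KL-to-reference penalty.
    total = rl_term + lambda_dmd * dmd_loss(x)
    opt.zero_grad()
    total.backward()
    opt.step()
```

The design point the sketch mirrors is that distillation and RL run in the same update: the reward pushes the generator beyond the teacher while the DMD term anchors it to the teacher's distribution.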
Similar Papers
Flash-DMD: Towards High-Fidelity Few-Step Image Generation with Efficient Distillation and Joint Reinforcement Learning
CV and Pattern Recognition
Makes AI art faster and better.
Adversarial Distribution Matching for Diffusion Distillation Towards Efficient Image and Video Synthesis
CV and Pattern Recognition
Makes AI create better pictures and videos faster.
Phased DMD: Few-step Distribution Matching Distillation via Score Matching within Subintervals
CV and Pattern Recognition
Makes AI create better videos and pictures.