ReDiF: Reinforced Distillation for Few Step Diffusion
By: Amirhossein Tighkhorshid, Zahra Dehghanian, Gholamali Aminian, and more
Potential Business Impact:
Teaches AI to create pictures much faster.
Distillation addresses the slow sampling problem in diffusion models by creating smaller models, or models that need fewer steps, to approximate the behavior of high-step teachers. In this work, we propose a reinforcement learning-based distillation framework for diffusion models. Instead of relying on fixed reconstruction or consistency losses, we treat the distillation process as a policy optimization problem, where the student is trained using a reward signal derived from alignment with the teacher's outputs. This RL-driven approach dynamically guides the student to explore multiple denoising paths, allowing it to take longer, optimized steps toward high-probability regions of the data distribution rather than relying on incremental refinements. Our framework exploits the inherent ability of diffusion models to handle larger steps and effectively manage the generative process. Experimental results show that our method achieves superior performance with significantly fewer inference steps and computational resources than existing distillation techniques. Additionally, the framework is model-agnostic, applicable to any type of diffusion model given a suitable reward function, providing a general optimization paradigm for efficient diffusion learning.
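To make the policy-optimization framing concrete, here is a minimal sketch of one plausible reading of the idea, not the paper's actual implementation: the student takes a single large denoising step, modeled as a Gaussian policy, and receives a reward equal to the negative distance between its sampled output and a frozen teacher's multi-step result, optimized with a REINFORCE-style update. The `TinyDenoiser` and `teacher_multistep` components below are hypothetical stand-ins.

```python
# Sketch of RL-based distillation (assumptions: Gaussian policy over the
# student's one-step output; reward = -L2 distance to teacher output).
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Hypothetical stand-in denoiser: maps a noisy sample to a clean one."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

def teacher_multistep(x, steps=8):
    """Hypothetical frozen teacher: many small refinement steps."""
    for _ in range(steps):
        x = x * 0.9  # placeholder for a real denoising update
    return x

student = TinyDenoiser()
log_sigma = torch.zeros(1, requires_grad=True)  # learned exploration scale
opt = torch.optim.Adam(list(student.parameters()) + [log_sigma], lr=1e-3)

for step in range(100):
    noisy = torch.randn(32, 16)                    # batch of noised samples
    with torch.no_grad():
        target = teacher_multistep(noisy)          # teacher's high-step output

    mean = student(noisy)                          # student's single big step
    dist = torch.distributions.Normal(mean, log_sigma.exp())
    action = dist.sample()                         # explore a denoising path
    reward = -((action - target) ** 2).mean(dim=1) # alignment with teacher

    # REINFORCE with a mean baseline: raise the probability of
    # denoising outcomes that score above-average reward.
    advantage = (reward - reward.mean()).detach()
    loss = -(dist.log_prob(action).sum(dim=1) * advantage).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the exploration scale lets the student sample multiple denoising paths around its predicted output, matching the abstract's description; the real framework may use a different policy parameterization or reward.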
Similar Papers
Distribution Matching Distillation Meets Reinforcement Learning
CV and Pattern Recognition
Makes AI pictures faster and better than before.
Flash-DMD: Towards High-Fidelity Few-Step Image Generation with Efficient Distillation and Joint Reinforcement Learning
CV and Pattern Recognition
Makes AI art faster and better.
Iterative Distillation for Reward-Guided Fine-Tuning of Diffusion Models in Biomolecular Design
Machine Learning (CS)
Designs new proteins and medicines faster.