ReDiF: Reinforced Distillation for Few Step Diffusion

Published: December 28, 2025 | arXiv ID: 2512.22802v1

By: Amirhossein Tighkhorshid, Zahra Dehghanian, Gholamali Aminian, et al.

Potential Business Impact:

Enables image-generating diffusion models to produce outputs in far fewer sampling steps, cutting inference time and compute cost.

Business Areas:
Generative AI, Image Generation

Distillation addresses the slow sampling problem in diffusion models by producing smaller or fewer-step models that approximate the behavior of high-step teachers. In this work, we propose a reinforcement-learning-based distillation framework for diffusion models. Instead of relying on fixed reconstruction or consistency losses, we treat distillation as a policy optimization problem in which the student is trained with a reward signal derived from alignment with the teacher's outputs. This RL-driven approach dynamically guides the student to explore multiple denoising paths, allowing it to take longer, optimized steps toward high-probability regions of the data distribution rather than relying on incremental refinements. Our framework exploits the inherent ability of diffusion models to handle larger steps and to manage the generative process effectively. Experimental results show that our method achieves superior performance with significantly fewer inference steps and less computation than existing distillation techniques. Additionally, the framework is model-agnostic: it applies to any type of diffusion model given a suitable reward function, providing a general optimization paradigm for efficient diffusion learning.
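
The listing gives only the abstract, so the sketch below is a hypothetical illustration of the core idea rather than the paper's actual algorithm: a few-step student whose stochastic denoising trajectory is scored by an alignment reward against a frozen many-step teacher and updated with a REINFORCE-style policy gradient. The toy `Denoiser` MLP, the Gaussian step policy, the step counts, and the negative-L2 reward are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy denoiser: predicts the noise in x_t given (x_t, t)."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )
    def forward(self, x, t):
        # t is a (batch, 1) tensor of timesteps scaled to [0, 1]
        return self.net(torch.cat([x, t], dim=-1))

@torch.no_grad()
def teacher_sample(teacher, x_T, n_steps=50):
    """Many small deterministic denoising steps (teacher trajectory)."""
    x = x_T
    for i in reversed(range(n_steps)):
        t = torch.full((x.shape[0], 1), (i + 1) / n_steps)
        x = x - teacher(x, t) / n_steps
    return x

def student_sample(student, x_T, n_steps=4, sigma=0.1):
    """Few large stochastic steps; returns the sample and the path log-prob.

    The student's step is a Gaussian centered on its deterministic update,
    so the sampled trajectory has a tractable log-likelihood for REINFORCE.
    """
    x, logp = x_T, 0.0
    for i in reversed(range(n_steps)):
        t = torch.full((x.shape[0], 1), (i + 1) / n_steps)
        mean = x - student(x, t) / n_steps           # one large step
        dist = torch.distributions.Normal(mean, sigma)
        x = dist.sample()                            # detached sample
        logp = logp + dist.log_prob(x).sum(dim=-1)   # accumulate path log-prob
    return x, logp

# --- training loop (teacher would be pretrained; random here for a demo) ---
torch.manual_seed(0)
teacher, student = Denoiser(), Denoiser()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    x_T = torch.randn(256, 2)                       # shared initial noise
    with torch.no_grad():
        target = teacher_sample(teacher, x_T)       # teacher's 50-step output
    sample, logp = student_sample(student, x_T)     # student's 4-step output

    # Reward: negative L2 distance to the teacher's sample (alignment).
    reward = -((sample - target) ** 2).sum(dim=-1)
    baseline = reward.mean().detach()               # variance reduction
    loss = -((reward - baseline).detach() * logp).mean()  # REINFORCE

    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  mean reward {reward.mean():.4f}")
```

Sharing the initial noise x_T between teacher and student turns the alignment reward into a per-sample signal, and the mean-reward baseline keeps the policy-gradient variance manageable with so few steps.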

Country of Origin
🇬🇧 United Kingdom

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)