From Structure to Detail: Hierarchical Distillation for Efficient Diffusion Model
By: Hanbo Cheng, Peng Wang, Kaixiang Lei, and more
Potential Business Impact:
Makes AI create detailed pictures much faster.
The inference latency of diffusion models remains a critical barrier to their real-time application. While trajectory-based and distribution-based step distillation methods offer solutions, they present a fundamental trade-off. Trajectory-based methods preserve global structure but act as a "lossy compressor", sacrificing high-frequency details. Conversely, distribution-based methods can achieve higher fidelity but often suffer from mode collapse and unstable training. This paper recasts them from independent paradigms into synergistic components within our novel Hierarchical Distillation (HD) framework. We leverage trajectory distillation not as a final generator, but to establish a structural "sketch" that provides a near-optimal initialization for the subsequent distribution-based refinement stage. This strategy yields an ideal initial distribution that raises the ceiling of overall performance. To further improve quality, we refine the adversarial training process. We find that standard discriminator structures are ineffective at refining an already high-quality generator. To overcome this, we introduce the Adaptive Weighted Discriminator (AWD), tailored for the HD pipeline. By dynamically allocating token weights, AWD focuses on local imperfections, enabling efficient detail refinement. Our approach demonstrates state-of-the-art performance across diverse tasks. On ImageNet $256\times256$, our single-step model achieves an FID of 2.26, rivaling its 250-step teacher. It also achieves promising results on the high-resolution text-to-image MJHQ benchmark, demonstrating its generalizability. Our method establishes a robust new paradigm for high-fidelity, single-step diffusion models.
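The abstract's core AWD idea, dynamically allocating token weights so the discriminator concentrates on local imperfections, can be illustrated with a minimal sketch. The function name `awd_generator_loss`, the use of a per-token non-saturating GAN loss, and the softmax-based weighting scheme are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def softmax(x, temperature=1.0):
    """Numerically stable softmax over a 1-D array of scores."""
    z = x / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def awd_generator_loss(fake_token_logits, temperature=1.0):
    """Hypothetical sketch of an adaptively weighted generator loss.

    fake_token_logits: (num_tokens,) per-token realism logits produced by a
    token-level discriminator (e.g. one logit per patch token) on a
    generated image. Higher logit = token looks more real.
    """
    # Per-token non-saturating GAN loss: softplus(-logit) = -log sigmoid(logit).
    per_token_loss = np.log1p(np.exp(-fake_token_logits))
    # Adaptive weights (assumption): tokens the discriminator flags as most
    # fake (largest per-token loss) receive the largest weight, so gradient
    # signal concentrates on local imperfections instead of being diluted
    # over already-convincing regions.
    weights = softmax(per_token_loss, temperature)
    return float((weights * per_token_loss).sum())
```

With uniform logits the weighting is uniform and the loss reduces to the ordinary per-token loss; when a single token is badly generated, its weight dominates and the loss is driven almost entirely by that region, which is the qualitative behavior the abstract attributes to AWD.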
Similar Papers
Diffusion As Self-Distillation: End-to-End Latent Diffusion In One Model
CV and Pattern Recognition
Makes AI create pictures faster and better.
Learning Few-Step Diffusion Models by Trajectory Distribution Matching
CV and Pattern Recognition
Makes AI art and video creation much faster.
Image-Free Timestep Distillation via Continuous-Time Consistency with Trajectory-Sampled Pairs
CV and Pattern Recognition
Makes AI create pictures much faster.