Autoregressive Distillation of Diffusion Transformers
By: Yeongmin Kim, Sotiris Anagnostidis, Yuming Du, and more
Potential Business Impact:
Makes AI draw pictures faster and better.
Diffusion models with transformer architectures have demonstrated promising capabilities in generating high-fidelity images and scalability to high resolution. However, the iterative sampling process required for synthesis is very resource-intensive. A line of work has focused on distilling solutions to probability flow ODEs into few-step student models. Nevertheless, existing methods have been limited by their reliance on the most recent denoised samples as input, rendering them susceptible to exposure bias. To address this limitation, we propose AutoRegressive Distillation (ARD), a novel approach that leverages the historical trajectory of the ODE to predict future steps. ARD offers two key benefits: 1) it mitigates exposure bias by utilizing a predicted historical trajectory that is less susceptible to accumulated errors, and 2) it leverages the previous history of the ODE trajectory as a more effective source of coarse-grained information. ARD modifies the teacher transformer architecture by adding a token-wise time embedding to mark each input from the trajectory history, and employs a block-wise causal attention mask for training. Furthermore, incorporating historical inputs only in the lower transformer layers enhances both performance and efficiency. We validate the effectiveness of ARD on class-conditioned generation on ImageNet and on text-to-image (T2I) synthesis. Our model achieves a $5\times$ reduction in FID degradation compared to the baseline methods while requiring only 1.1\% extra FLOPs on ImageNet-256. Moreover, ARD reaches an FID of 1.84 on ImageNet-256 in merely 4 steps, and outperforms the publicly available 1024p text-to-image distilled models in prompt adherence score with a minimal drop in FID compared to the teacher. Project page: https://github.com/alsdudrla10/ARD.
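The block-wise causal attention the abstract describes can be illustrated with a small mask-construction sketch. This is not the authors' implementation; it is a minimal NumPy example, assuming the trajectory history is flattened into equal-sized token blocks (one block per ODE step) and that a query token may attend only to tokens in its own block or earlier blocks:

```python
import numpy as np

def blockwise_causal_mask(num_blocks: int, tokens_per_block: int) -> np.ndarray:
    """Boolean attention mask (True = attention allowed).

    Tokens belonging to trajectory step i may attend to all tokens in
    steps 0..i, so each predicted step only sees earlier (noisier)
    states of the ODE trajectory, never future ones.
    """
    n = num_blocks * tokens_per_block
    block_idx = np.arange(n) // tokens_per_block  # block id of each token
    # query q may attend key k iff block(k) <= block(q)
    return block_idx[None, :] <= block_idx[:, None]

# Example: a history of 3 trajectory steps with 2 tokens each
m = blockwise_causal_mask(3, 2)
```

Here every token attends freely within its own block (full attention inside a step) while causality is enforced only across blocks, which is what distinguishes a block-wise causal mask from the per-token causal mask used in standard autoregressive language models.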
Similar Papers
AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion
CV and Pattern Recognition
Makes videos that look real and flow smoothly.
Marrying Autoregressive Transformer and Diffusion with Multi-Reference Autoregression
CV and Pattern Recognition
Creates better pictures faster than before.
Playing with Transformer at 30+ FPS via Next-Frame Diffusion
CV and Pattern Recognition
Makes videos play super fast, like a movie.