TurboDiffusion: Accelerating Video Diffusion Models by 100-200 Times
By: Jintao Zhang, Kaiwen Zheng, Kai Jiang, and more
We introduce TurboDiffusion, a video generation acceleration framework that speeds up end-to-end diffusion generation by 100-200x while maintaining video quality. TurboDiffusion relies on three main components for acceleration: (1) Attention acceleration: TurboDiffusion uses low-bit SageAttention and trainable Sparse-Linear Attention (SLA) to speed up attention computation. (2) Step distillation: TurboDiffusion adopts rCM for efficient step distillation. (3) W8A8 quantization: TurboDiffusion quantizes model weights and activations to 8 bits to accelerate linear layers and compress the model. In addition, TurboDiffusion incorporates several other engineering optimizations. We conduct experiments on the Wan2.2-I2V-14B-720P, Wan2.1-T2V-1.3B-480P, Wan2.1-T2V-14B-720P, and Wan2.1-T2V-14B-480P models. Experimental results show that TurboDiffusion achieves a 100-200x speedup for video generation even on a single RTX 5090 GPU while maintaining comparable video quality. The GitHub repository, including model checkpoints and easy-to-use code, is available at https://github.com/thu-ml/TurboDiffusion.
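To make the W8A8 idea concrete, here is a minimal sketch of symmetric per-tensor 8-bit quantization applied to a linear layer in PyTorch. This is a generic illustration of the technique named in the abstract, not TurboDiffusion's actual kernels; the helper names (`quantize_sym`, `w8a8_linear`) and the per-tensor scaling choice are assumptions for demonstration only.

```python
# Minimal W8A8 sketch: quantize both weights and activations to int8 with
# symmetric per-tensor scales, run an integer matmul, then dequantize.
# Hypothetical illustration; real deployments use fused INT8 GEMM kernels
# and often per-channel or per-block scales for better accuracy.
import torch

def quantize_sym(x: torch.Tensor):
    """Symmetric per-tensor quantization to int8; returns (q, scale)."""
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

def w8a8_linear(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Compute y = x @ w.T with int8 operands, dequantizing the result."""
    qx, sx = quantize_sym(x)   # activations -> int8
    qw, sw = quantize_sym(w)   # weights -> int8
    # Accumulate in int32 to avoid overflow, as INT8 GEMM kernels do.
    acc = qx.to(torch.int32) @ qw.to(torch.int32).T
    return acc.to(torch.float32) * (sx * sw)

x = torch.randn(4, 64)     # batch of activations
w = torch.randn(128, 64)   # linear-layer weight
err = (w8a8_linear(x, w) - x @ w.T).abs().max()
print(f"max abs error vs fp32: {err.item():.4f}")
```

The sketch casts to int32 and runs the matmul on the CPU purely to mimic the integer accumulation of a real INT8 GEMM; the speed and memory benefits claimed in the abstract come from hardware int8 kernels, not from this reference computation.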