LightCache: Memory-Efficient, Training-Free Acceleration for Video Generation
By: Yang Xiao, Gen Li, Kaiyuan Deng, and more
Potential Business Impact:
Makes AI video creation faster and uses less memory.
Training-free acceleration has emerged as an active research area in diffusion-based video generation. The redundancy of latents during diffusion model inference provides a natural entry point for acceleration. In this paper, we decompose the inference process into encoding, denoising, and decoding stages, and observe that cache-based acceleration methods often cause substantial memory surges in the latter two stages. To address this problem, we analyze the characteristics of inference at each stage and propose stage-specific strategies for reducing memory consumption: 1) asynchronous cache swapping; 2) feature chunking; 3) sliced latent decoding. At the same time, we ensure that the time overhead introduced by these three strategies remains lower than the acceleration gains. Compared with the baseline, our approach achieves faster inference and lower memory usage while keeping quality degradation within an acceptable range. The code is available at https://github.com/NKUShaw/LightCache.
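The abstract names the three strategies without implementation detail, so the following is a minimal PyTorch sketch of what each one could look like. All function names, tensor shapes, and the `vae.decode` interface here are illustrative assumptions, not the authors' code; see the linked repository for the actual implementation.

```python
# Hypothetical sketch of the three stage-specific memory-reduction strategies.
# Names, shapes, and the VAE interface are assumptions for illustration.
import torch


def async_swap_out(cache: dict, key: str, stream: "torch.cuda.Stream") -> None:
    """Asynchronous cache swapping: move a cached feature to pinned CPU
    memory on a side CUDA stream so the transfer overlaps with denoising."""
    feat = cache[key]
    host = torch.empty(feat.shape, dtype=feat.dtype, device="cpu", pin_memory=True)
    with torch.cuda.stream(stream):
        host.copy_(feat, non_blocking=True)  # overlaps with compute on the default stream
    cache[key] = host  # the GPU copy becomes free for the allocator to reuse
    # Before the feature is needed again, copy it back and synchronize the stream.


def chunked_forward(block, features: torch.Tensor, chunks: int = 4) -> torch.Tensor:
    """Feature chunking: run a memory-heavy block over slices of the token
    dimension instead of the whole tensor, bounding peak activation memory."""
    parts = [block(part) for part in features.chunk(chunks, dim=1)]
    return torch.cat(parts, dim=1)


def sliced_decode(vae, latents: torch.Tensor, slice_size: int = 2) -> torch.Tensor:
    """Sliced latent decoding: decode a few latent frames at a time.
    Assumes latents shaped (batch, channels, frames, height, width)."""
    frames = [
        vae.decode(latents[:, :, t : t + slice_size])
        for t in range(0, latents.shape[2], slice_size)
    ]
    return torch.cat(frames, dim=2)
```

Sliced decoding is in the same spirit as the `enable_slicing()` / `enable_tiling()` switches diffusers exposes on its VAEs; in all three cases the trade-off is a small amount of extra launch and transfer time in exchange for a lower memory peak, consistent with the abstract's claim that the overhead stays below the acceleration gains.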
Similar Papers
Toward Lightweight and Fast Decoders for Diffusion Models in Image and Video Generation
CV and Pattern Recognition
Makes AI create pictures and videos much faster.
MixCache: Mixture-of-Cache for Video Diffusion Transformer Acceleration
Graphics
Makes video generation faster without losing quality.
QuantCache: Adaptive Importance-Guided Quantization with Hierarchical Latent and Layer Caching for Video Generation
CV and Pattern Recognition
Makes video creation faster without losing quality.