MagCache: Fast Video Generation with Magnitude-Aware Cache
By: Zehong Ma, Longhui Wei, Feng Wang, and more
Potential Business Impact:
Makes AI video generation faster while keeping videos looking good.
Existing acceleration techniques for video diffusion models often rely on uniform heuristics or time-embedding variants to skip timesteps and reuse cached features. These approaches typically require extensive calibration with curated prompts and risk inconsistent outputs due to prompt-specific overfitting. In this paper, we present a novel and robust discovery: a unified magnitude law observed across different models and prompts. Specifically, the magnitude ratio of successive residual outputs decreases monotonically and steadily over most timesteps, then drops rapidly in the last several steps. Leveraging this insight, we introduce Magnitude-aware Cache (MagCache), which adaptively skips unimportant timesteps using an error-modeling mechanism and an adaptive caching strategy. Unlike existing methods that require dozens of curated samples for calibration, MagCache needs only a single sample. Experimental results show that MagCache achieves 2.1x and 2.68x speedups on Open-Sora and Wan 2.1, respectively, while preserving superior visual fidelity. It significantly outperforms existing methods in LPIPS, SSIM, and PSNR under comparable computational budgets.
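For intuition, here is a minimal Python sketch of how magnitude-aware skipping could look inside a denoising loop. The class name `MagCache`, the `error_threshold` and `max_skips` parameters, and the simple error model based on `|1 - m_t|` are illustrative assumptions inferred from the abstract, not the authors' exact implementation.

```python
class MagCache:
    """Sketch of magnitude-aware caching for a diffusion sampling loop.

    mag_ratios[t] is the magnitude ratio ||r_t|| / ||r_{t-1}|| of
    successive residual outputs, recorded once from a single
    calibration sample (per the paper's single-sample calibration).
    """

    def __init__(self, mag_ratios, error_threshold=0.12, max_skips=2):
        self.mag_ratios = mag_ratios          # per-timestep calibration data
        self.error_threshold = error_threshold  # assumed budget, illustrative
        self.max_skips = max_skips            # cap on consecutive reuses
        self.accumulated_error = 0.0
        self.consecutive_skips = 0
        self.cached_residual = None

    def should_skip(self, t):
        # Estimate the error of reusing the cached residual at step t:
        # the farther the magnitude ratio is from 1, the more the
        # residual has drifted since it was cached.
        est_error = abs(1.0 - self.mag_ratios[t])
        if (self.cached_residual is not None
                and self.accumulated_error + est_error < self.error_threshold
                and self.consecutive_skips < self.max_skips):
            self.accumulated_error += est_error
            self.consecutive_skips += 1
            return True
        return False

    def step(self, model, x, t):
        if self.should_skip(t):
            # Reuse the cached residual instead of a full forward pass.
            return x + self.cached_residual
        out = model(x, t)                  # full model evaluation
        self.cached_residual = out - x     # residual output to cache
        self.accumulated_error = 0.0       # reset error after a real pass
        self.consecutive_skips = 0
        return out
```

At sampling time, `cache.step(model, x, t)` would replace the direct model call in the denoising loop; steps whose estimated reuse error stays within budget are served from the cache, yielding the reported speedups without per-prompt calibration.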
Similar Papers
Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching
CV and Pattern Recognition
Makes video generation much faster and better.
TaoCache: Structure-Maintained Video Generation Acceleration
CV and Pattern Recognition
Makes AI video generation faster without losing quality.
LightCache: Memory-Efficient, Training-Free Acceleration for Video Generation
CV and Pattern Recognition
Makes AI video creation faster and uses less memory.