Score: 1

Lightning Fast Caching-based Parallel Denoising Prediction for Accelerating Talking Head Generation

Published: August 25, 2025 | arXiv ID: 2509.00052v1

By: Jianzhi Long, Wenhao Sun, Rongcheng Tu, and more

Potential Business Impact:

Makes talking-head video generation much faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Diffusion-based talking head models generate high-quality, photorealistic videos but suffer from slow inference, limiting practical applications. Existing acceleration methods for general diffusion models fail to exploit the temporal and spatial redundancies unique to talking head generation. In this paper, we propose a task-specific framework addressing these inefficiencies through two key innovations. First, we introduce Lightning-fast Caching-based Parallel denoising prediction (LightningCP), caching static features to bypass most model layers at inference time. We also enable parallel prediction using cached features and estimated noisy latents as inputs, efficiently bypassing sequential sampling. Second, we propose Decoupled Foreground Attention (DFA) to further accelerate attention computation, exploiting the spatial decoupling in talking head videos to restrict attention to dynamic foreground regions. Additionally, we remove reference features in certain layers for extra speedup. Extensive experiments demonstrate that our framework significantly improves inference speed while preserving video quality.
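
To make the DFA idea concrete, here is a minimal sketch of foreground-restricted attention: attention is computed only over dynamic foreground tokens while background tokens keep their existing (e.g. cached) features. The function name, the flattened token layout, and the boolean foreground mask are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: restrict attention to foreground tokens (assumed layout, not the paper's code).
import torch
import torch.nn.functional as F

def foreground_only_attention(x, fg_mask, w_q, w_k, w_v):
    """
    x:       (N, D) spatial tokens for one frame
    fg_mask: (N,)   bool, True for dynamic foreground tokens
    w_q/k/v: (D, D) projection weights
    Returns: (N, D) tokens; only foreground positions are updated.
    """
    out = x.clone()                  # background tokens pass through unchanged (e.g. reused cache)
    fg = x[fg_mask]                  # (M, D), with M << N in talking-head frames
    q, k, v = fg @ w_q, fg @ w_k, fg @ w_v
    attn = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    out[fg_mask] = attn @ v          # attention cost scales with M^2 instead of N^2
    return out

# Toy usage: 1024 tokens, roughly a quarter of them foreground
x = torch.randn(1024, 64)
fg_mask = torch.rand(1024) < 0.25
w_q, w_k, w_v = (torch.randn(64, 64) * 0.05 for _ in range(3))
y = foreground_only_attention(x, fg_mask, w_q, w_k, w_v)
print(y.shape)  # torch.Size([1024, 64])
```

The speedup comes from the quadratic cost of attention: shrinking the token set from N to the foreground subset M reduces the dominant term from N^2 to M^2, which is why restricting attention to the moving face region pays off in this setting.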

Country of Origin
🇦🇺 🇸🇬 Singapore, Australia

Page Count
8 pages

Category
Computer Science:
Graphics