TPDiff: Temporal Pyramid Video Diffusion Model
By: Lingmin Ran, Mike Zheng Shou
Potential Business Impact:
Makes video creation faster and cheaper.
The development of video diffusion models unveils a significant challenge: their substantial computational demands. To mitigate this challenge, we note that the reverse process of diffusion exhibits an inherent entropy-reducing nature. Given the inter-frame redundancy in the video modality, maintaining full frame rates in high-entropy stages is unnecessary. Based on this insight, we propose TPDiff, a unified framework to enhance training and inference efficiency. By dividing diffusion into several stages, our framework progressively increases the frame rate along the diffusion process, with only the last stage operating at full frame rate, thereby optimizing computational efficiency. To train the multi-stage diffusion model, we introduce a dedicated training framework: stage-wise diffusion. By solving the partitioned probability flow ordinary differential equations (ODEs) of diffusion under aligned data and noise, our training strategy is applicable to various diffusion forms and further enhances training efficiency. Comprehensive experimental evaluations validate the generality of our method, demonstrating a 50% reduction in training cost and a 1.5x improvement in inference efficiency.
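The temporal-pyramid idea can be sketched as a reverse process that starts at a reduced frame rate and upsamples between stages, with only the final stage denoising at the full frame rate. The sketch below is illustrative only, assuming a simple halving schedule and a toy denoising step; `stage_frame_counts`, `temporal_upsample`, and `run_pyramid` are hypothetical names, not the paper's API, and the actual method solves learned probability flow ODEs rather than the stand-in update shown here.

```python
import numpy as np

def stage_frame_counts(full_frames, num_stages):
    # Assumed schedule: halve the frame count at each earlier (higher-entropy)
    # stage; the last stage runs at the full frame rate.
    return [max(1, full_frames // (2 ** (num_stages - 1 - s)))
            for s in range(num_stages)]

def temporal_upsample(frames, target_len):
    # Nearest-neighbor repetition along the time axis; a placeholder for
    # the alignment between data and noise across stages.
    idx = np.linspace(0, len(frames) - 1, target_len).round().astype(int)
    return frames[idx]

def run_pyramid(full_frames=16, num_stages=3, steps_per_stage=10, hw=(8, 8)):
    counts = stage_frame_counts(full_frames, num_stages)
    # Start from noise at the lowest frame rate.
    x = np.random.randn(counts[0], *hw)
    for n in counts:
        if x.shape[0] != n:
            x = temporal_upsample(x, n)
        for _ in range(steps_per_stage):
            # Toy stand-in for one reverse-ODE (denoising) step.
            x = x - 0.01 * x
    return x
```

Because early stages process far fewer frames, most denoising steps run on smaller tensors, which is where the claimed training and inference savings come from.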
Similar Papers
DiffuseSlide: Training-Free High Frame Rate Video Generation Diffusion
CV and Pattern Recognition
Makes slow videos look super smooth and fast.
Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models
Machine Learning (CS)
Makes videos and predictions flow naturally in time.
Hierarchical Flow Diffusion for Efficient Frame Interpolation
CV and Pattern Recognition
Makes videos smoother and faster to create.