Conditional Video Generation for High-Efficiency Video Compression
By: Fangqiu Yi, Jingyu Xu, Jiawei Shao, and more
Potential Business Impact:
Makes videos look better with less data.
Perceptual studies demonstrate that conditional diffusion models excel at reconstructing video content aligned with human visual perception. Building on this insight, we propose a video compression framework that leverages conditional diffusion models for perceptually optimized reconstruction. Specifically, we reframe video compression as a conditional generation task, where a generative model synthesizes video from sparse yet informative signals. Our approach introduces three key modules: (1) multi-granular conditioning that captures both static scene structure and dynamic spatio-temporal cues; (2) compact representations designed for efficient transmission without sacrificing semantic richness; (3) multi-condition training with modality dropout and role-aware embeddings, which prevents over-reliance on any single modality and enhances robustness. Extensive experiments show that our method significantly outperforms both traditional and neural codecs on perceptual quality metrics such as Fréchet Video Distance (FVD) and LPIPS, especially under high compression ratios.
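The abstract names modality dropout and role-aware embeddings without giving implementation details, so the following is a minimal PyTorch sketch of how module (3) might be wired up. The class name ConditionEncoder, the concatenation-based fusion, the token shapes, and the 0.3 drop probability are all illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Fuses multiple conditioning modalities with modality dropout
    and role-aware embeddings (hypothetical names and shapes)."""

    def __init__(self, num_modalities: int, dim: int, drop_prob: float = 0.3):
        super().__init__()
        self.drop_prob = drop_prob
        # One learned "role" vector per modality (e.g. static structure
        # vs. dynamic motion cues) so fused tokens stay distinguishable.
        self.role_embed = nn.Embedding(num_modalities, dim)

    def forward(self, cond_tokens: list[torch.Tensor]) -> torch.Tensor:
        # cond_tokens[i] has shape (batch, seq_i, dim) for modality i.
        fused = []
        for i, tokens in enumerate(cond_tokens):
            # Modality dropout: during training, randomly silence an
            # entire modality so the generator cannot over-rely on it.
            if self.training and torch.rand(()).item() < self.drop_prob:
                tokens = torch.zeros_like(tokens)
            # Role-aware embedding marks which modality the tokens
            # came from; broadcasting adds it to every token.
            role = self.role_embed.weight[i]
            fused.append(tokens + role)
        # Concatenate along the sequence axis for cross-attention in
        # the diffusion backbone (an assumed fusion strategy).
        return torch.cat(fused, dim=1)

# Usage with two hypothetical modalities: keyframe structure tokens
# and spatio-temporal motion tokens.
encoder = ConditionEncoder(num_modalities=2, dim=64)
keyframes = torch.randn(4, 16, 64)  # static scene structure
motion = torch.randn(4, 8, 64)      # dynamic motion cues
conditions = encoder([keyframes, motion])  # -> (4, 24, 64)
```

Silencing a whole modality during training forces the generator to reconstruct plausible video from whichever conditions survive, which is plausibly what the abstract's robustness claim refers to when a signal is heavily quantized or lost in transmission.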
Similar Papers
Generative Latent Diffusion for Efficient Spatiotemporal Data Reduction
Machine Learning (CS)
Saves space by smartly guessing missing video parts.
Higher fidelity perceptual image and video compression with a latent conditioned residual denoising diffusion model
Image and Video Processing
Makes pictures look good while keeping details.
REGEN: Learning Compact Video Embedding with (Re-)Generative Decoder
Computer Vision and Pattern Recognition
Makes videos smaller for faster creation.