Rethinking Video Super-Resolution: Towards Diffusion-Based Methods without Motion Alignment
By: Zhihao Zhan, Wang Pang, Xiang Zhu, and more
Potential Business Impact:
Makes blurry videos sharp without extra motion-tracking work.
In this work, we rethink video super-resolution by introducing a method built on the Diffusion Posterior Sampling framework, combined with an unconditional video diffusion transformer operating in latent space. The video generation model, a diffusion transformer, functions as a space-time model. We argue that a powerful model that learns the physics of the real world can handle diverse motion patterns as prior knowledge, eliminating the need for explicit estimation of optical flow or motion parameters for pixel alignment. Furthermore, a single instance of the proposed video diffusion transformer can adapt to different sampling conditions without retraining. Empirical results on synthetic and real-world datasets demonstrate the feasibility of diffusion-based, alignment-free video super-resolution.
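To make the sampling mechanics concrete, here is a minimal PyTorch sketch of one Diffusion Posterior Sampling step specialized to super-resolution: the unconditional model's noise prediction gives a Tweedie estimate of the clean video, a per-frame downsampling operator stands in for the measurement model, and the gradient of its residual nudges the ancestral update toward the low-resolution observation. All names here (`dps_sr_step`, `denoiser`, `zeta`, the bilinear operator) are illustrative assumptions, not the paper's implementation; in particular, for clarity the measurement operator is applied directly to the sample rather than through a latent decoder, unlike the paper's latent-space setup.

```python
import torch
import torch.nn.functional as F

def dps_sr_step(x_t, t, y_lr, denoiser, alphas, alpha_bars, zeta=0.5, scale=4):
    """One reverse step of Diffusion Posterior Sampling for video SR (sketch).

    Assumed shapes: x_t is (B, C, T, H, W); y_lr is the observed low-res
    clip (B, C, T, H//scale, W//scale). `denoiser(x, t)` is an unconditional
    video diffusion model predicting the added noise; `alphas`/`alpha_bars`
    hold the DDPM noise schedule; `zeta` is a tunable guidance step size.
    """
    x_t = x_t.detach().requires_grad_(True)

    eps = denoiser(x_t, t)            # unconditional noise prediction
    a_t = alphas[t]
    ab_t = alpha_bars[t]

    # Tweedie / posterior-mean estimate of the clean video x0.
    x0_hat = (x_t - (1.0 - ab_t).sqrt() * eps) / ab_t.sqrt()

    # Measurement operator A: per-frame bilinear downsampling.
    b, c, nt, h, w = x0_hat.shape
    frames = x0_hat.transpose(1, 2).reshape(b * nt, c, h, w)
    y_hat = F.interpolate(frames, scale_factor=1.0 / scale,
                          mode="bilinear", align_corners=False)
    y_hat = y_hat.reshape(b, nt, c, h // scale, w // scale).transpose(1, 2)

    # DPS guidance: gradient of the measurement residual w.r.t. x_t.
    residual = torch.linalg.vector_norm(y_lr - y_hat)
    grad = torch.autograd.grad(residual, x_t)[0]

    # Plain DDPM ancestral mean (stochastic noise term omitted for brevity).
    mean = (x_t.detach()
            - (1.0 - a_t) / (1.0 - ab_t).sqrt() * eps.detach()) / a_t.sqrt()

    # Nudge the unconditional sample toward consistency with y_lr.
    return mean - zeta * grad
```

In a full sampler this step would be iterated from pure noise down to t = 0; because the guidance enters only through the measurement operator, the same unconditional model could in principle be reused for other degradations by swapping that operator, which matches the abstract's claim of adapting to different sampling conditions without retraining.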
Similar Papers
Survey of Video Diffusion Models: Foundations, Implementations, and Applications
CV and Pattern Recognition
Makes computers create realistic videos from text.
Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search
CV and Pattern Recognition
Makes AI videos move more naturally and realistically.
Hierarchical Flow Diffusion for Efficient Frame Interpolation
CV and Pattern Recognition
Makes videos smoother and faster to create.