Rethinking Video Super-Resolution: Towards Diffusion-Based Methods without Motion Alignment

Published: March 5, 2025 | arXiv ID: 2503.03355v4

By: Zhihao Zhan, Wang Pang, Xiang Zhu and more

Potential Business Impact:

Sharpens blurry, low-resolution videos without requiring explicit motion estimation or per-task re-training.

Business Areas:
Autonomous Vehicles, Transportation

In this work, we rethink the approach to video super-resolution by introducing a method based on the Diffusion Posterior Sampling framework, combined with an unconditional video diffusion transformer operating in latent space. The video generation model, a diffusion transformer, functions as a space-time model. We argue that a powerful model, which learns the physics of the real world, can easily handle various kinds of motion patterns as prior knowledge, thus eliminating the need for explicit estimation of optical flows or motion parameters for pixel alignment. Furthermore, a single instance of the proposed video diffusion transformer model can adapt to different sampling conditions without re-training. Empirical results on synthetic and real-world datasets illustrate the feasibility of diffusion-based, alignment-free video super-resolution.
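The abstract's core idea, posterior sampling with a degradation-consistency gradient instead of optical-flow alignment, can be illustrated with a toy sketch. The snippet below is not the authors' implementation: the degradation operator (average pooling), the `denoise` stand-in, and the step-size normalization are all illustrative assumptions; a real system would apply this guidance inside a latent video diffusion transformer's sampling loop.

```python
import numpy as np

def downsample(x, factor=2):
    # Toy degradation operator A: average-pool by `factor` (a stand-in for
    # the real low-resolution measurement process).
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def dps_guidance_step(x_t, y, denoise, step_size=1.0, factor=2, eps=1e-8):
    """One Diffusion Posterior Sampling-style guidance step (sketch).

    x_t     : current sample in the diffusion trajectory
    y       : low-resolution measurement, y = A(x_clean)
    denoise : callable returning x0_hat, the model's estimate of the clean
              frame given x_t (here supplied by the caller as a stand-in)

    The update nudges x_t so that A(denoise(x_t)) matches y; no optical-flow
    or motion-alignment module appears anywhere in the loop.
    """
    x0_hat = denoise(x_t)
    residual = downsample(x0_hat, factor) - y
    # Gradient of 0.5 * ||A(x0_hat) - y||^2 for A = average pooling:
    # the adjoint of average pooling spreads each residual over its block
    # with weight 1/factor^2.
    grad = np.kron(residual, np.ones((factor, factor))) / factor**2
    norm = np.linalg.norm(residual) + eps
    return x_t - step_size * grad / norm
```

Iterating this step with even an identity `denoise` drives the measurement residual toward zero, which is the data-consistency half of DPS; the learned diffusion prior supplies the other half by keeping samples on the natural-video manifold.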

Country of Origin
🇨🇳 China

Page Count
7 pages

Category
Computer Science:
Computer Vision and Pattern Recognition