Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution
By: Shijun Shi, Jing Xu, Lijing Lu, and more
Potential Business Impact:
Makes blurry videos clear without weird glitches.
Existing diffusion-based video super-resolution (VSR) methods are prone to introducing complex degradations and noticeable artifacts into high-resolution (HR) videos because of their inherent randomness. In this paper, we propose a noise-robust real-world VSR framework that incorporates self-supervised learning and Mamba into pre-trained latent diffusion models. To ensure content consistency across adjacent frames, we enhance the diffusion model with a global spatio-temporal attention mechanism via a Video State-Space block with a 3D Selective Scan module, which reinforces coherence at an affordable computational cost. To further reduce artifacts in generated details, we introduce a self-supervised ControlNet that uses HR features as guidance and employs contrastive learning to extract degradation-insensitive features from low-resolution (LR) videos. Finally, we propose a three-stage training strategy based on a mixture of HR-LR videos to stabilize VSR training. The proposed Self-supervised ControlNet with Spatio-Temporal Continuous Mamba VSR algorithm achieves better perceptual quality than state-of-the-art methods on real-world VSR benchmark datasets, validating the effectiveness of the proposed model design and training strategies.
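To make the two key ingredients of the abstract concrete, here is a minimal sketch, not the authors' code, of a 3D selective scan: video latents are flattened into one spatio-temporal token sequence and processed by a selective state-space recurrence whose parameters depend on the input, so each token can gate what it keeps from the past. All module and variable names (`SelectiveScan3D`, `flatten_video`, `state`) are illustrative assumptions; the paper's actual block and scan orders may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveScan3D(nn.Module):
    """Toy selective SSM over a flattened spatio-temporal sequence."""
    def __init__(self, dim: int, state: int = 16):
        super().__init__()
        self.dim, self.state = dim, state
        # Input-dependent SSM parameters (the "selective" part).
        self.to_dt = nn.Linear(dim, dim)            # per-token step size
        self.to_B = nn.Linear(dim, state)           # input gate
        self.to_C = nn.Linear(dim, state)           # output read-out
        self.A_log = nn.Parameter(torch.zeros(dim, state))  # state decay

    def forward(self, x):  # x: (B, L, dim), L = T*H*W flattened tokens
        dt = F.softplus(self.to_dt(x))              # (B, L, dim)
        Bm, Cm = self.to_B(x), self.to_C(x)         # (B, L, state)
        A = -torch.exp(self.A_log)                  # (dim, state), stable decay
        h = x.new_zeros(x.size(0), self.dim, self.state)
        ys = []
        for t in range(x.size(1)):                  # sequential scan over tokens
            dA = torch.exp(dt[:, t].unsqueeze(-1) * A)          # (B, dim, state)
            dB = dt[:, t].unsqueeze(-1) * Bm[:, t].unsqueeze(1)
            h = dA * h + dB * x[:, t].unsqueeze(-1)             # state update
            ys.append((h * Cm[:, t].unsqueeze(1)).sum(-1))      # read-out
        return torch.stack(ys, dim=1)               # (B, L, dim)

def flatten_video(z):  # z: (B, T, C, H, W) -> (B, T*H*W, C)
    B, T, C, H, W = z.shape
    return z.permute(0, 1, 3, 4, 2).reshape(B, T * H * W, C)
```

And a similarly hedged sketch of the contrastive objective for degradation-insensitive features: two differently degraded LR views of the same clip should map to nearby features, with other clips in the batch serving as negatives. This is a standard InfoNCE-style formulation assumed for illustration, not the paper's exact loss.

```python
def degradation_contrastive_loss(f1, f2, tau: float = 0.07):
    """f1, f2: (B, D) features from two degraded views of the same clips."""
    f1, f2 = F.normalize(f1, dim=-1), F.normalize(f2, dim=-1)
    logits = f1 @ f2.t() / tau                      # (B, B) cosine similarities
    targets = torch.arange(f1.size(0), device=f1.device)
    # Diagonal pairs (same underlying clip) are positives; off-diagonal
    # entries (other clips in the batch) are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```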
Similar Papers
MambaVSR: Content-Aware Scanning State Space Model for Video Super-Resolution
CV and Pattern Recognition
Makes blurry videos sharp and clear.
Trajectory-aware Shifted State Space Models for Online Video Super-Resolution
CV and Pattern Recognition
Makes blurry videos sharp using past frames.
A Separable Self-attention Inspired by the State Space Model for Computer Vision
CV and Pattern Recognition
Makes computers see pictures faster and better.