Implicit Neural Representation for Video Restoration
By: Mary Aiyetigbo, Wanqi Yuan, Feng Luo, and more
Potential Business Impact:
Makes blurry videos clear at any zoom.
High-resolution (HR) videos play a crucial role in many computer vision applications. Although existing video restoration (VR) methods can significantly enhance video quality by exploiting temporal information across video frames, they are typically trained for fixed upscaling factors and lack the flexibility to handle scales or degradations beyond their training distribution. In this paper, we introduce VR-INR, a novel video restoration approach based on Implicit Neural Representations (INRs) that is trained on only a single upscaling factor ($\times 4$) yet generalizes effectively to arbitrary, unseen super-resolution scales at test time. Notably, VR-INR also performs zero-shot denoising on noisy inputs, despite never having seen noisy data during training. Our method employs a hierarchical spatial-temporal-texture encoding framework coupled with multi-resolution implicit hash encoding, enabling adaptive decoding of high-resolution, noise-suppressed frames from low-resolution inputs at any desired magnification. Experimental results show that VR-INR consistently maintains high-quality reconstructions at scales and noise levels unseen during training, significantly outperforming state-of-the-art approaches in sharpness, detail preservation, and denoising efficacy.
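To make the "multi-resolution implicit hash encoding" idea concrete, here is a minimal, hedged sketch (not the authors' code; all names and parameters are illustrative) of how a continuous pixel coordinate can be encoded by bilinearly interpolating learned features from several hashed grids. Because the query coordinates are continuous, the same encoding can be sampled at any output resolution, which is what enables decoding at arbitrary magnifications.

```python
import numpy as np

# Illustrative sketch of multi-resolution hash encoding (the building block
# the abstract refers to); not the paper's actual implementation.

PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # per-dimension hash primes

def hash_coords(ij, table_size):
    """Spatially hash integer 2D grid coordinates into a table index."""
    h = ij.astype(np.uint64) * PRIMES
    return (h[..., 0] ^ h[..., 1]) % np.uint64(table_size)

def encode(xy, tables, base_res=4, growth=2.0):
    """Bilinearly interpolate hashed features at each resolution level."""
    feats = []
    for lvl, table in enumerate(tables):
        res = int(base_res * growth ** lvl)
        pos = xy * (res - 1)                 # continuous grid position
        lo = np.floor(pos).astype(np.int64)
        frac = pos - lo
        f = 0.0
        for dx in (0, 1):                    # accumulate the 4 cell corners
            for dy in (0, 1):
                corner = lo + np.array([dx, dy])
                w = ((frac[..., 0] if dx else 1 - frac[..., 0]) *
                     (frac[..., 1] if dy else 1 - frac[..., 1]))
                f = f + w[..., None] * table[hash_coords(corner, len(table))]
        feats.append(f)
    return np.concatenate(feats, axis=-1)    # per-level features, concatenated

rng = np.random.default_rng(0)
levels, table_size, feat_dim = 4, 1 << 10, 2
tables = [rng.normal(0, 1e-2, (table_size, feat_dim)) for _ in range(levels)]

# Query an arbitrary output grid, e.g. a 128x128 target regardless of the
# input resolution: any magnification works because coordinates are continuous.
h = w = 128
ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
xy = np.stack([xs, ys], axis=-1).reshape(-1, 2)
features = encode(xy, tables)
print(features.shape)  # (16384, 8)
```

In a full system these concatenated features would feed a small MLP decoder that outputs RGB per pixel, and the hash tables would be trained end-to-end; the sketch only shows why the representation is resolution-agnostic.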
Similar Papers
Implicit Neural Representation for Video and Image Super-Resolution
CV and Pattern Recognition
Makes blurry pictures and videos sharp and clear.
SR-NeRV: Improving Embedding Efficiency of Neural Video Representation via Super-Resolution
Image and Video Processing
Makes videos look clearer with less data.
MSNeRV: Neural Video Representation with Multi-Scale Feature Fusion
CV and Pattern Recognition
Makes videos smaller without losing detail.