MiVID: Multi-Strategic Self-Supervision for Video Frame Interpolation using Diffusion Model

Published: November 8, 2025 | arXiv ID: 2511.06019v1

By: Priyansh Srivastava, Romit Chatterjee, Abir Sen, and more

Potential Business Impact:

Makes videos smoother by guessing missing frames.

Business Areas:
Image Recognition, Data and Analytics, Software

Video Frame Interpolation (VFI) remains a cornerstone of video enhancement, enabling temporal upscaling for tasks like slow-motion rendering, frame-rate conversion, and video restoration. While classical methods rely on optical flow and learning-based models assume access to dense ground truth, both struggle with occlusions, domain shifts, and ambiguous motion. This article introduces MiVID, a lightweight, self-supervised, diffusion-based framework for video interpolation. Our model eliminates the need for explicit motion estimation by combining a 3D U-Net backbone with transformer-style temporal attention, trained under a hybrid masking regime that simulates occlusions and motion uncertainty. Cosine-based progressive masking and adaptive loss scheduling allow our network to learn robust spatiotemporal representations without any high-frame-rate supervision. Our framework is evaluated on the UCF101-7 and DAVIS-7 datasets. MiVID is trained entirely on CPU using 9-frame video segments from these datasets, making it a low-resource yet highly effective pipeline. Despite these constraints, our model reaches its best results at just 50 epochs, competitive with several supervised baselines. This work demonstrates the power of self-supervised diffusion priors for temporally coherent frame synthesis and provides a scalable path toward accessible and generalizable VFI systems.
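The cosine-based progressive masking and hybrid occlusion masking mentioned in the abstract could be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' released code: the function names, the ramp endpoints (`start`, `end`), the 16-pixel patch size, and the choice to leave the two endpoint frames unmasked are all hypothetical choices for exposition.

```python
import math
import torch

def cosine_mask_ratio(epoch: int, total_epochs: int,
                      start: float = 0.1, end: float = 0.5) -> float:
    """Cosine-annealed masking ratio: ramps the fraction of masked
    content from `start` to `end` over training, so the simulated
    occlusions get progressively harder (assumed schedule)."""
    progress = epoch / max(total_epochs - 1, 1)
    return end - (end - start) * (1.0 + math.cos(math.pi * progress)) / 2.0

def apply_hybrid_mask(frames: torch.Tensor, ratio: float,
                      patch: int = 16) -> torch.Tensor:
    """Zero out random spatial patches of the intermediate frames to
    simulate occlusion. `frames` has shape (T, C, H, W); H and W are
    assumed divisible by `patch`. Endpoint frames stay intact so the
    model always sees the interpolation anchors."""
    t, _, h, w = frames.shape
    masked = frames.clone()
    for ti in range(1, t - 1):  # mask interior frames only
        drop = torch.rand(h // patch, w // patch) < ratio
        mask = drop.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
        masked[ti, :, mask] = 0.0
    return masked
```

With a 9-frame segment and the 50-epoch schedule reported in the abstract, a call like `apply_hybrid_mask(segment, cosine_mask_ratio(epoch, 50))` would mask only the seven interior frames, tightening the self-supervision signal as training progresses.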

Country of Origin
🇮🇳 India

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition