Time-Correlated Video Bridge Matching
By: Viacheslav Vasilev, Arseny Ivanov, Nikita Gushchin, and more
Potential Business Impact:
Makes videos look smoother and more realistic.
Diffusion models excel in noise-to-data generation tasks, providing a mapping from a Gaussian distribution to a more complex data distribution. However, they struggle to model translations between complex distributions, limiting their effectiveness in data-to-data tasks. While Bridge Matching (BM) models address this by learning the translation between data distributions, their application to time-correlated data sequences remains unexplored. This is a critical limitation for video generation and manipulation tasks, where maintaining temporal coherence is particularly important. To address this gap, we propose Time-Correlated Video Bridge Matching (TCVBM), a framework that extends BM to time-correlated data sequences in the video domain. TCVBM explicitly models inter-sequence dependencies within the diffusion bridge, directly incorporating temporal correlations into the sampling process. We compare our approach to classical methods based on bridge matching and diffusion models on three video-related tasks: frame interpolation, image-to-video generation, and video super-resolution. TCVBM achieves superior performance across multiple quantitative metrics, demonstrating enhanced generation quality and reconstruction fidelity.
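The abstract's core idea can be illustrated with a minimal sketch: a standard diffusion bridge (here, a Brownian bridge) interpolates between paired source and target frames, and time correlation is injected by sharing correlated noise across the frame axis instead of sampling each frame's noise independently. This is not the paper's exact formulation; the AR(1) correlation scheme and the `rho` parameter below are illustrative assumptions standing in for TCVBM's inter-sequence dependency modeling.

```python
import numpy as np

def brownian_bridge_sample(x0, x1, t, noise):
    # Brownian-bridge interpolant between endpoints x0 and x1 at time t in [0, 1]:
    # mean is the linear interpolation, variance is t * (1 - t).
    return (1.0 - t) * x0 + t * x1 + np.sqrt(t * (1.0 - t)) * noise

def correlated_frame_noise(num_frames, frame_shape, rho=0.9, rng=None):
    # AR(1) noise along the frame axis: consecutive frames share correlated
    # perturbations (coefficient rho), so bridge samples vary smoothly in time.
    # Each frame remains marginally N(0, 1). rho is a hypothetical knob, not
    # a parameter from the paper.
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((num_frames, *frame_shape))
    noise = np.empty_like(eps)
    noise[0] = eps[0]
    for k in range(1, num_frames):
        noise[k] = rho * noise[k - 1] + np.sqrt(1.0 - rho**2) * eps[k]
    return noise

# Toy usage: bridge a 4-frame "video" from a source clip toward a target clip
# (e.g. low-resolution frames toward high-resolution targets in super-resolution).
rng = np.random.default_rng(0)
src = rng.standard_normal((4, 8, 8))
tgt = rng.standard_normal((4, 8, 8))
noise = correlated_frame_noise(4, (8, 8), rho=0.9, rng=rng)
x_mid = brownian_bridge_sample(src, tgt, t=0.5, noise=noise)
```

With independent per-frame noise, intermediate samples flicker frame to frame; the correlated noise makes the stochastic component of neighboring frames nearly identical, which is the intuition behind building temporal correlation directly into the bridge's sampling process.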
Similar Papers
Vision Bridge Transformer at Scale
CV and Pattern Recognition
Edits pictures and videos with simple instructions.
Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising
CV and Pattern Recognition
Makes videos move exactly how you want.
Versatile Transition Generation with Image-to-Video Diffusion
CV and Pattern Recognition
Creates smooth video transitions between scenes.