Versatile Transition Generation with Image-to-Video Diffusion
By: Zuhao Yang, Jiahui Zhang, Yingchen Yu, and more
Potential Business Impact:
Creates smooth video transitions between scenes.
Leveraging text, images, structure maps, or motion trajectories as conditional guidance, diffusion models have achieved great success in automated, high-quality video generation. However, generating smooth and plausible transition videos given the first and last video frames together with descriptive text prompts remains largely underexplored. We present VTG, a Versatile Transition video Generation framework that can generate smooth, high-fidelity, and semantically coherent video transitions. VTG introduces interpolation-based initialization, which helps preserve object identity and handle abrupt content changes effectively. In addition, it incorporates dual-directional motion fine-tuning and representation alignment regularization to mitigate the limitations of pre-trained image-to-video diffusion models in motion smoothness and generation fidelity, respectively. To evaluate VTG and facilitate future studies on unified transition generation, we collected TransitBench, a comprehensive benchmark for transition generation covering two representative transition tasks: concept blending and scene transition. Extensive experiments show that VTG achieves superior transition performance consistently across all tasks.
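The abstract does not spell out how interpolation-based initialization works, but a common reading is to blend the encoded first and last frames on a per-frame schedule and use the noised result as the starting latent for denoising, so generation begins from content that already carries both endpoints. Below is a minimal sketch under that assumption; it is not the authors' released code, and names such as `interp_init` and the DDPM-style noising are illustrative placeholders.

```python
# Hypothetical sketch of interpolation-based initialization for an
# image-to-video diffusion model (an assumption, not VTG's actual code).
import torch

def interp_init(first_latent: torch.Tensor,
                last_latent: torch.Tensor,
                num_frames: int,
                alpha_bar_t: float) -> torch.Tensor:
    """Build a noised initial latent sequence from two endpoint latents.

    first_latent, last_latent: (C, H, W) latents of the boundary frames.
    alpha_bar_t: cumulative noise-schedule value at the starting timestep.
    """
    # Per-frame blend weights running from the first frame (0.0) to the
    # last frame (1.0) across the clip.
    weights = torch.linspace(0.0, 1.0, num_frames)
    frames = torch.stack([torch.lerp(first_latent, last_latent, w)
                          for w in weights])  # (T, C, H, W)
    # Standard DDPM-style forward noising of the interpolated sequence,
    # so denoising starts near content anchored to both endpoints.
    noise = torch.randn_like(frames)
    return (alpha_bar_t ** 0.5) * frames + ((1 - alpha_bar_t) ** 0.5) * noise

# Usage (encoders and timestep schedule assumed):
#   z0 = interp_init(encode(first_frame), encode(last_frame),
#                    num_frames=16, alpha_bar_t=0.05)
```

The intuition is that a plain-noise start gives the model no hint about either endpoint, whereas an interpolated start biases every frame toward the two boundary images, which plausibly explains the claimed gains in identity preservation under abrupt content changes.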
Similar Papers
Progressive Image Restoration via Text-Conditioned Video Generation
CV and Pattern Recognition
Fixes blurry, dark, or low-quality pictures.
Bridging Text and Video Generation: A Survey
Graphics
Makes videos from written words.
Extrapolating and Decoupling Image-to-Video Generation Models: Motion Modeling is Easier Than You Think
CV and Pattern Recognition
Makes still pictures move with text instructions.