KeyVID: Keyframe-Aware Video Diffusion for Audio-Synchronized Visual Animation
By: Xingrui Wang, Jiang Liu, Ze Wang, and more
Potential Business Impact:
Makes videos match sounds with fewer frames.
Generating video from various conditions, such as text, image, and audio, enables both spatial and temporal control, leading to high-quality generation results. Videos with dramatic motion often require a higher frame rate to ensure smoothness. Currently, most audio-to-visual animation models use uniformly sampled frames from video clips. However, uniform sampling at low frame rates misses the key moments of dramatic motion, while directly increasing the number of frames demands substantially more memory. In this paper, we propose KeyVID, a keyframe-aware audio-to-visual animation framework that significantly improves generation quality at the key moments of an audio signal while maintaining computational efficiency. Given an image and an audio input, we first localize keyframe time steps from the audio. Then, we use a keyframe generator to produce the corresponding visual keyframes. Finally, we generate all intermediate frames using a motion interpolator. Through extensive experiments, we demonstrate that KeyVID significantly improves audio-video synchronization and video quality across multiple datasets, particularly for highly dynamic motion. The code is released at https://github.com/XingruiWang/KeyVID.
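The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's method: the peak-picking heuristic, the function names, and the numeric "frames" are all assumptions for illustration, whereas KeyVID uses learned diffusion models for keyframe generation and motion interpolation.

```python
def localize_keyframes(audio_energy, threshold=0.5):
    """Stage 1 (toy stand-in): pick time steps where the audio-energy
    envelope has a local maximum above a threshold."""
    n = len(audio_energy)
    keys = [0]  # always keep the first frame
    for t in range(1, n - 1):
        if (audio_energy[t] > threshold
                and audio_energy[t] >= audio_energy[t - 1]
                and audio_energy[t] >= audio_energy[t + 1]):
            keys.append(t)
    if keys[-1] != n - 1:
        keys.append(n - 1)  # always keep the last frame
    return keys

def generate_keyframes(first_frame, key_steps):
    """Stage 2 (placeholder): map each key step to a 'frame', here just a
    number; in KeyVID this is a keyframe-aware diffusion generator."""
    return {t: first_frame + t for t in key_steps}

def interpolate_motion(keyframes, num_steps):
    """Stage 3 (toy stand-in): linearly fill the intermediate frames
    between consecutive keyframes, mimicking the motion interpolator."""
    steps = sorted(keyframes)
    video = [0.0] * num_steps
    for a, b in zip(steps, steps[1:]):
        for t in range(a, b + 1):
            alpha = (t - a) / (b - a)
            video[t] = (1 - alpha) * keyframes[a] + alpha * keyframes[b]
    return video

# Toy audio-energy envelope with a spike at t = 4 (e.g. a drum hit).
energy = [0.1, 0.2, 0.1, 0.3, 0.9, 0.3, 0.1, 0.2]
keys = localize_keyframes(energy)
frames = generate_keyframes(10.0, keys)
video = interpolate_motion(frames, len(energy))
print(keys)   # only a few keyframes are generated densely...
print(video)  # ...and the rest are interpolated
```

The point of the sketch is the memory/quality trade-off from the abstract: only the audio-salient time steps get "generated" keyframes, and the cheap interpolator fills in the rest.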
Similar Papers
I2V3D: Controllable image-to-video generation with 3D guidance
CV and Pattern Recognition
Turns still pictures into moving videos with control.
KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation
CV and Pattern Recognition
Makes talking cartoon faces stay real for a long time.
Extending Visual Dynamics for Video-to-Music Generation
Multimedia
Makes videos match music's mood and rhythm.