KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation
By: Antoni Bigata, Michał Stypułkowski, Rodrigo Mira, and more
Potential Business Impact:
Keeps audio-driven talking faces realistic and consistent over long videos.
Current audio-driven facial animation methods achieve impressive results for short videos but suffer from error accumulation and identity drift when extended to longer durations. Existing methods attempt to mitigate this through external spatial control, increasing long-term consistency but compromising the naturalness of motion. We propose KeyFace, a novel two-stage diffusion-based framework, to address these issues. In the first stage, keyframes are generated at a low frame rate, conditioned on audio input and an identity frame, to capture essential facial expressions and movements over extended periods of time. In the second stage, an interpolation model fills in the gaps between keyframes, ensuring smooth transitions and temporal coherence. To further enhance realism, we incorporate continuous emotion representations and handle a wide range of non-speech vocalizations (NSVs), such as laughter and sighs. We also introduce two new evaluation metrics for assessing lip synchronization and NSV generation. Experimental results show that KeyFace outperforms state-of-the-art methods in generating natural, coherent facial animations over extended durations, successfully encompassing NSVs and continuous emotions.
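To make the two-stage pipeline described above concrete, the sketch below shows roughly how keyframe generation and interpolation could fit together. It is a minimal illustration only: `keyframe_model`, `interp_model`, their `sample` methods, and the frame rates are hypothetical placeholders, not the authors' actual interfaces or settings.

```python
import numpy as np

def animate_face(audio_features: np.ndarray,
                 identity_frame: np.ndarray,
                 keyframe_model,   # stage 1: low-frame-rate keyframe generator (placeholder)
                 interp_model,     # stage 2: keyframe-to-keyframe interpolator (placeholder)
                 keyframe_fps: int = 2,
                 output_fps: int = 25) -> list:
    """Sketch of a two-stage talking-head pipeline: sparse keyframes first,
    then interpolation between them to reach the target frame rate."""
    # Stage 1: sample keyframes at a low frame rate, conditioned on the audio
    # and a single identity frame, so identity stays stable over long clips.
    keyframes = keyframe_model.sample(audio=audio_features,
                                      identity=identity_frame,
                                      fps=keyframe_fps)

    # Stage 2: fill the gap between each pair of consecutive keyframes,
    # conditioned on the audio, to produce smooth full-frame-rate video.
    frames = []
    frames_per_gap = output_fps // keyframe_fps
    for start, end in zip(keyframes[:-1], keyframes[1:]):
        segment = interp_model.sample(start=start, end=end,
                                      audio=audio_features,
                                      num_frames=frames_per_gap)
        frames.extend(segment)
    frames.append(keyframes[-1])
    return frames
```

Because the keyframes anchor identity and expression over long spans, any drift introduced by the interpolator is confined to the short gaps between them, which is the intuition behind the two-stage design.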
Similar Papers
KeyframeFace: From Text to Expressive Facial Keyframes
CV and Pattern Recognition
Makes computer faces show emotions from words.
KSDiff: Keyframe-Augmented Speech-Aware Dual-Path Diffusion for Facial Animation
Graphics
Makes talking videos look more real.
KeyVID: Keyframe-Aware Video Diffusion for Audio-Synchronized Visual Animation
CV and Pattern Recognition
Makes videos match sounds with fewer frames.