Model See Model Do: Speech-Driven Facial Animation with Style Control
By: Yifang Pan, Karan Singh, Luiz Gustavo Hafemann
Potential Business Impact:
Makes cartoon faces talk and show feelings in a chosen style.
Speech-driven 3D facial animation plays a key role in applications such as virtual avatars, gaming, and digital content creation. While existing methods have made significant progress in achieving accurate lip synchronization and generating basic emotional expressions, they often struggle to capture and effectively transfer nuanced performance styles. We propose a novel example-based generation framework that conditions a latent diffusion model on a reference style clip to produce highly expressive and temporally coherent facial animations. To address the challenge of accurately adhering to the style reference, we introduce a novel conditioning mechanism called style basis, which extracts key poses from the reference and additively guides the diffusion generation process to fit the style without compromising lip synchronization quality. This approach enables the model to capture subtle stylistic cues while ensuring that the generated animations align closely with the input speech. Extensive qualitative, quantitative, and perceptual evaluations demonstrate the effectiveness of our method in faithfully reproducing the desired style while achieving superior lip synchronization across various speech scenarios.
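To make the "style basis" idea from the abstract more concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' implementation: the module names, tensor shapes, and the way the style offset is blended into the denoiser output are all assumptions for illustration. The intent is only to show the two pieces the abstract describes: key poses extracted from a reference style clip, and an additive offset that guides a speech-conditioned diffusion denoiser toward that style.

```python
# Illustrative sketch only: modules, shapes, and mixing scheme are assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn


class StyleBasisExtractor(nn.Module):
    """Summarizes a reference style clip into a small set of key poses (the 'style basis')."""

    def __init__(self, pose_dim: int = 64, num_basis: int = 8):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, pose_dim, batch_first=True)
        self.to_basis = nn.Linear(pose_dim, num_basis * pose_dim)
        self.num_basis = num_basis
        self.pose_dim = pose_dim

    def forward(self, style_clip: torch.Tensor) -> torch.Tensor:
        # style_clip: (batch, frames, pose_dim) facial pose sequence from the reference clip
        _, last_hidden = self.encoder(style_clip)
        basis = self.to_basis(last_hidden[-1])              # (batch, num_basis * pose_dim)
        return basis.view(-1, self.num_basis, self.pose_dim)


class StyleGuidedDenoiser(nn.Module):
    """One denoising step: audio features drive lip sync; the style basis adds an offset toward the reference style."""

    def __init__(self, pose_dim: int = 64, audio_dim: int = 128, num_basis: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(pose_dim + audio_dim + 1, 256), nn.SiLU(), nn.Linear(256, pose_dim)
        )
        self.basis_weights = nn.Linear(pose_dim + audio_dim + 1, num_basis)

    def forward(self, noisy_pose, audio_feat, t, style_basis):
        # noisy_pose: (batch, pose_dim), audio_feat: (batch, audio_dim), t: (batch, 1) diffusion step
        h = torch.cat([noisy_pose, audio_feat, t], dim=-1)
        speech_driven = self.backbone(h)                        # speech-driven (lip-sync) estimate
        weights = torch.softmax(self.basis_weights(h), dim=-1)  # soft selection over the key poses
        style_offset = torch.einsum("bk,bkd->bd", weights, style_basis)
        # Additive guidance: the style offset is added on top of the speech-driven estimate,
        # so lip synchronization is not overwritten by the style reference.
        return speech_driven + style_offset
```

In this reading, the additive design is what lets style and lip sync coexist: the backbone remains responsible for matching the speech, while the basis term only nudges the pose toward expressions observed in the reference clip.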
Similar Papers
StyleSpeaker: Audio-Enhanced Fine-Grained Style Modeling for Speech-Driven 3D Facial Animation
Multimedia
Makes talking faces move realistically for any person.
Controllable Expressive 3D Facial Animation via Diffusion in a Unified Multimodal Space
Multimedia
Makes cartoon faces show real feelings from sound.
EmoDiffusion: Enhancing Emotional 3D Facial Animation with Latent Diffusion Models
CV and Pattern Recognition
Makes computer faces show real feelings when they talk.