Model See Model Do: Speech-Driven Facial Animation with Style Control

Published: May 2, 2025 | arXiv ID: 2505.01319v2

By: Yifang Pan, Karan Singh, Luiz Gustavo Hafemann

Potential Business Impact:

Lets virtual avatars and game characters speak with accurate lip sync and expressive, stylized emotion taken from a reference performance.

Business Areas:
Motion Capture, Media and Entertainment, Video

Speech-driven 3D facial animation plays a key role in applications such as virtual avatars, gaming, and digital content creation. While existing methods have made significant progress in achieving accurate lip synchronization and generating basic emotional expressions, they often struggle to capture and effectively transfer nuanced performance styles. We propose a novel example-based generation framework that conditions a latent diffusion model on a reference style clip to produce highly expressive and temporally coherent facial animations. To address the challenge of accurately adhering to the style reference, we introduce a novel conditioning mechanism called style basis, which extracts key poses from the reference and additively guides the diffusion generation process to fit the style without compromising lip synchronization quality. This approach enables the model to capture subtle stylistic cues while ensuring that the generated animations align closely with the input speech. Extensive qualitative, quantitative, and perceptual evaluations demonstrate the effectiveness of our method in faithfully reproducing the desired style while achieving superior lip synchronization across various speech scenarios.
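To make the "style basis" idea concrete, below is a minimal PyTorch sketch of how key poses pooled from a reference clip could additively guide a diffusion denoiser while per-frame audio features drive lip sync. Every module name, shape, and design choice here (the attention pooling, the sigmoid gate, `StyleBasisGuidedDenoiser`) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StyleBasisGuidedDenoiser(nn.Module):
    """Hypothetical sketch: key poses from a style reference additively
    bias each denoising step; an audio projection carries lip-sync content.
    All dimensions and modules are assumptions for illustration."""

    def __init__(self, latent_dim=128, audio_dim=256, n_basis=8):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, latent_dim)
        # Learned queries that attention-pool the reference motion into a
        # small set of key poses (the "style basis").
        self.basis_queries = nn.Parameter(torch.randn(n_basis, latent_dim))
        self.basis_attn = nn.MultiheadAttention(latent_dim, num_heads=4,
                                                batch_first=True)
        self.denoiser = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(latent_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.mix = nn.Linear(latent_dim, 1)  # per-frame weight on the style term

    def extract_style_basis(self, ref_motion):
        # ref_motion: (B, T_ref, latent_dim) encoded reference clip
        q = self.basis_queries.unsqueeze(0).expand(ref_motion.size(0), -1, -1)
        basis, _ = self.basis_attn(q, ref_motion, ref_motion)
        return basis  # (B, n_basis, latent_dim) key poses

    def forward(self, noisy_latents, audio_feats, style_basis):
        # noisy_latents: (B, T, latent_dim); audio_feats: (B, T, audio_dim)
        h = self.denoiser(noisy_latents + self.audio_proj(audio_feats))
        # Additive guidance: blend the pooled key pose into each frame,
        # gated per frame so speech-driven content is not overwritten.
        style = style_basis.mean(dim=1, keepdim=True)  # (B, 1, latent_dim)
        gate = torch.sigmoid(self.mix(h))              # (B, T, 1)
        return h + gate * style                        # predicted clean latents


# Usage with dummy tensors (shapes are illustrative):
model = StyleBasisGuidedDenoiser()
x_t = torch.randn(2, 100, 128)    # noisy animation latents
audio = torch.randn(2, 100, 256)  # per-frame speech features
ref = torch.randn(2, 60, 128)     # encoded style reference clip
basis = model.extract_style_basis(ref)
x0_pred = model(x_t, audio, basis)
```

The gated additive term reflects the paper's stated goal of fitting the style reference without compromising lip synchronization: the speech pathway always flows through the denoiser, while the style contribution is a separate, weighted offset.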

Country of Origin
🇨🇦 Canada

Page Count
10 pages

Category
Computer Science: Graphics