Controllable Expressive 3D Facial Animation via Diffusion in a Unified Multimodal Space

Published: April 14, 2025 | arXiv ID: 2506.10007v1

By: Kangwei Liu, Junwu Liu, Xiaowei Yi, and more

Potential Business Impact:

Lets animated 3D faces convey realistic emotions driven by speech audio.

Business Areas:
Animation, Media and Entertainment, Video

Audio-driven emotional 3D facial animation encounters two significant challenges: (1) reliance on single-modal control signals (videos, text, or emotion labels) without leveraging their complementary strengths for comprehensive emotion manipulation, and (2) deterministic regression-based mapping that constrains the stochastic nature of emotional expressions and non-verbal behaviors, limiting the expressiveness of synthesized animations. To address these challenges, we present a diffusion-based framework for controllable expressive 3D facial animation. Our approach introduces two key innovations: (1) a FLAME-centered multimodal emotion binding strategy that aligns diverse modalities (text, audio, and emotion labels) through contrastive learning, enabling flexible emotion control from multiple signal sources, and (2) an attention-based latent diffusion model with content-aware attention and emotion-guided layers, which enriches motion diversity while maintaining temporal coherence and natural facial dynamics. Extensive experiments demonstrate that our method outperforms existing approaches across most metrics, achieving a 21.6% improvement in emotion similarity while preserving physiologically plausible facial dynamics. Project Page: https://kangweiiliu.github.io/Control_3D_Animation.
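To make the first innovation concrete, below is a minimal, hypothetical PyTorch sketch of what a FLAME-centered contrastive binding objective could look like: separate encoders project text, audio, and FLAME expression features into a shared space, and a symmetric InfoNCE loss pulls paired embeddings together. All names, dimensions, and the specific loss form are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of FLAME-centered multimodal emotion binding via contrastive learning.
# Encoder names, feature dimensions, and the InfoNCE formulation are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Projects one modality (text, audio, or FLAME expression features) into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products act as cosine similarities.
        return F.normalize(self.net(x), dim=-1)


def contrastive_binding_loss(anchor: torch.Tensor, other: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss that pulls paired (anchor, other) embeddings together in the batch."""
    logits = anchor @ other.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)  # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B = 8
    flame_enc = ModalityEncoder(in_dim=100)   # e.g. FLAME expression parameters
    text_enc = ModalityEncoder(in_dim=768)    # e.g. pooled text-encoder features
    audio_enc = ModalityEncoder(in_dim=512)   # e.g. pooled speech features

    flame_z = flame_enc(torch.randn(B, 100))
    text_z = text_enc(torch.randn(B, 768))
    audio_z = audio_enc(torch.randn(B, 512))

    # FLAME embeddings act as the central anchor to which the other modalities are bound,
    # so any of them can later serve as an emotion control signal.
    loss = contrastive_binding_loss(flame_z, text_z) + contrastive_binding_loss(flame_z, audio_z)
    loss.backward()
    print(float(loss))
```

Once the modalities share this space, an emotion embedding obtained from any single source (a text prompt, a reference audio clip, or a discrete label) can plausibly condition the downstream latent diffusion model in the same way.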

Country of Origin
🇨🇳 China

Page Count
6 pages

Category
Computer Science:
Multimedia