Controllable Expressive 3D Facial Animation via Diffusion in a Unified Multimodal Space
By: Kangwei Liu, Junwu Liu, Xiaowei Yi, and more
Potential Business Impact:
Makes cartoon faces show real feelings from sound.
Audio-driven emotional 3D facial animation encounters two significant challenges: (1) reliance on single-modal control signals (videos, text, or emotion labels) without leveraging their complementary strengths for comprehensive emotion manipulation, and (2) deterministic regression-based mapping that constrains the stochastic nature of emotional expressions and non-verbal behaviors, limiting the expressiveness of synthesized animations. To address these challenges, we present a diffusion-based framework for controllable expressive 3D facial animation. Our approach introduces two key innovations: (1) a FLAME-centered multimodal emotion binding strategy that aligns diverse modalities (text, audio, and emotion labels) through contrastive learning, enabling flexible emotion control from multiple signal sources, and (2) an attention-based latent diffusion model with content-aware attention and emotion-guided layers, which enriches motion diversity while maintaining temporal coherence and natural facial dynamics. Extensive experiments demonstrate that our method outperforms existing approaches across most metrics, achieving a 21.6% improvement in emotion similarity while preserving physiologically plausible facial dynamics. Project Page: https://kangweiiliu.github.io/Control_3D_Animation.
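To make the multimodal emotion binding idea concrete, here is a minimal sketch of how text, audio, and emotion-label embeddings could be pulled into a shared space anchored on FLAME expression parameters via a symmetric contrastive (InfoNCE-style) loss, as the abstract describes. This is not the authors' implementation; all module names, feature dimensions, and the exact loss form are illustrative assumptions.

```python
# Hedged sketch: contrastive binding of text / audio / label emotion cues
# to a FLAME-expression anchor space. Dimensions and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionBinder(nn.Module):
    def __init__(self, text_dim=768, audio_dim=512, num_labels=8,
                 flame_expr_dim=50, embed_dim=256):
        super().__init__()
        # One projection head per modality into the shared emotion space.
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.label_embed = nn.Embedding(num_labels, embed_dim)
        # FLAME expression parameters act as the anchor modality.
        self.flame_proj = nn.Linear(flame_expr_dim, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07)

    def contrastive(self, a, b):
        # Symmetric InfoNCE between two batches of normalized embeddings.
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = self.logit_scale.exp() * a @ b.t()
        targets = torch.arange(a.size(0), device=a.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    def forward(self, text_feat, audio_feat, label_ids, flame_expr):
        anchor = self.flame_proj(flame_expr)
        loss = (self.contrastive(self.text_proj(text_feat), anchor) +
                self.contrastive(self.audio_proj(audio_feat), anchor) +
                self.contrastive(self.label_embed(label_ids), anchor))
        return loss / 3.0

# Usage with random stand-in features for a batch of 16 clips.
binder = EmotionBinder()
loss = binder(torch.randn(16, 768), torch.randn(16, 512),
              torch.randint(0, 8, (16,)), torch.randn(16, 50))
loss.backward()
```

Once the three modalities share one embedding space, any of them could in principle serve as the emotion condition for the latent diffusion model, which is what enables the flexible, mixed-signal control the paper claims.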
Similar Papers
EmoDiffusion: Enhancing Emotional 3D Facial Animation with Latent Diffusion Models
CV and Pattern Recognition
Makes computer faces show real feelings when they talk.
ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion
CV and Pattern Recognition
Changes faces to show any emotion perfectly.
Model See Model Do: Speech-Driven Facial Animation with Style Control
Graphics
Makes cartoon faces talk and show feelings.