Score: 2

Cosh-DiT: Co-Speech Gesture Video Synthesis via Hybrid Audio-Visual Diffusion Transformers

Published: March 13, 2025 | arXiv ID: 2503.09942v1

By: Yasheng Sun, Zhiliang Xu, Hang Zhou, and more

BigTech Affiliations: Baidu

Potential Business Impact:

Generates videos of people whose hands and faces move realistically as they talk.

Business Areas:
Speech Recognition Data and Analytics, Software

Co-speech gesture video synthesis is a challenging task that requires both probabilistic modeling of human gestures and the synthesis of realistic images that align with the rhythmic nuances of speech. To address these challenges, we propose Cosh-DiT, a Co-speech gesture video system with hybrid Diffusion Transformers that perform audio-to-motion and motion-to-video synthesis using discrete and continuous diffusion modeling, respectively. First, we introduce an audio Diffusion Transformer (Cosh-DiT-A) to synthesize expressive gesture dynamics synchronized with speech rhythms. To capture upper body, facial, and hand movement priors, we employ vector-quantized variational autoencoders (VQ-VAEs) to jointly learn their dependencies within a discrete latent space. Then, for realistic video synthesis conditioned on the generated speech-driven motion, we design a visual Diffusion Transformer (Cosh-DiT-V) that effectively integrates spatial and temporal contexts. Extensive experiments demonstrate that our framework consistently generates lifelike videos with expressive facial expressions and natural, smooth gestures that align seamlessly with speech.
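The listing includes no code, but the abstract's central idea for Cosh-DiT-A is modeling motion in a discrete latent space learned by a VQ-VAE. Below is a minimal PyTorch sketch of the standard VQ-VAE quantization step that such a design relies on; the class name, codebook size, feature dimensions, and the straight-through estimator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of VQ-VAE vector quantization for a discrete motion
# latent space (illustrative assumptions throughout; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment loss

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, time, code_dim) continuous encoder output for a motion clip
        flat = z_e.reshape(-1, z_e.size(-1))                 # (B*T, D)
        dists = torch.cdist(flat, self.codebook.weight)      # (B*T, K) distances to codes
        indices = dists.argmin(dim=-1)                       # nearest code per time step
        z_q = self.codebook(indices).view_as(z_e)            # quantized latents

        # Codebook + commitment losses, as in standard VQ-VAE training.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())

        # Straight-through estimator so gradients reach the encoder.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), loss

# Usage: quantize per-frame motion features (e.g., body/face/hand pose
# embeddings) into token indices that a discrete diffusion model can predict.
vq = VectorQuantizer()
z_e = torch.randn(2, 64, 256)        # 2 clips, 64 frames, 256-dim features
z_q, tokens, vq_loss = vq(z_e)
print(tokens.shape, vq_loss.item())  # torch.Size([2, 64]), scalar loss
```

In a pipeline like the one the abstract describes, the resulting token sequences would serve as the discrete targets for the audio-conditioned stage, while the continuous motion-to-video stage operates on the decoded motion.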

Country of Origin
🇨🇳 🇸🇬 🇯🇵 China, Singapore, Japan

Page Count
19 pages

Category
Computer Science:
Computer Vision and Pattern Recognition