AI killed the video star. Audio-driven diffusion model for expressive talking head generation

Published: November 27, 2025 | arXiv ID: 2511.22488v1

By: Baptiste Chopin, Tashvik Dhamija, Pranav Balaji, and more

Potential Business Impact:

Makes faces talk and move like real people.

Business Areas:
Speech Recognition Data and Analytics, Software

We propose Dimitra++, a novel framework for audio-driven talking head generation, designed to jointly learn lip motion, facial expression, and head pose. Specifically, we propose a conditional Motion Diffusion Transformer (cMDT) that models facial motion sequences using a 3D representation. The cMDT is conditioned on two inputs: a reference facial image, which determines appearance, and an audio sequence, which drives the motion. Quantitative and qualitative experiments, as well as a user study, on two widely used datasets, VoxCeleb2 and CelebV-HQ, suggest that Dimitra++ outperforms existing approaches in generating realistic talking heads with natural lip motion, facial expression, and head pose.
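The conditioning scheme described in the abstract, a diffusion model over motion sequences that is steered by a reference-image embedding (appearance) and an audio embedding (motion), can be sketched as follows. This is a minimal toy illustration, not the paper's actual cMDT: the denoiser, shapes, schedule, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 8, 16           # motion sequence length, per-frame 3D motion dims (assumed)
C_IMG, C_AUD = 32, 32  # conditioning embedding sizes (assumed)

def toy_denoiser(x_t, t, img_emb, aud_emb):
    """Stand-in for the cMDT: predicts the noise in x_t given timestep t
    and the two conditioning signals (here, just a random linear map)."""
    cond = np.concatenate([img_emb, aud_emb])              # fuse both conditions
    W = rng.standard_normal((D, D + C_IMG + C_AUD)) * 0.01
    inp = np.concatenate([x_t, np.tile(cond, (T, 1))], axis=1)
    return inp @ W.T + 0.001 * t                           # (T, D) noise estimate

def reverse_step(x_t, t, img_emb, aud_emb, alpha=0.99):
    """One simplified DDPM-style reverse-diffusion update."""
    eps = toy_denoiser(x_t, t, img_emb, aud_emb)
    return (x_t - (1 - alpha) / np.sqrt(1 - alpha**t) * eps) / np.sqrt(alpha)

x = rng.standard_normal((T, D))   # start the motion sequence from Gaussian noise
img = rng.standard_normal(C_IMG)  # reference-image embedding (appearance)
aud = rng.standard_normal(C_AUD)  # audio-sequence embedding (drives the motion)
for t in range(50, 0, -1):        # iterate the reverse diffusion process
    x = reverse_step(x, t, img, aud)
print(x.shape)                    # denoised motion sequence for the talking head
```

The point of the sketch is the interface: the same denoiser sees both conditions at every step, so appearance and audio jointly shape the generated lip, expression, and head-pose motion.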

Page Count
28 pages

Category
Computer Science:
CV and Pattern Recognition