AI killed the video star: Audio-driven diffusion model for expressive talking head generation
By: Baptiste Chopin, Tashvik Dhamija, Pranav Balaji, and more
Potential Business Impact:
Makes faces talk and move like real people.
We propose Dimitra++, a novel framework for audio-driven talking head generation, designed to jointly learn lip motion, facial expression, and head pose motion. Specifically, we propose a conditional Motion Diffusion Transformer (cMDT) that models facial motion sequences in a 3D representation. The cMDT is conditioned on two inputs: a reference facial image, which determines appearance, and an audio sequence, which drives the motion. Quantitative and qualitative experiments on two widely used datasets, VoxCeleb2 and CelebV-HQ, as well as a user study, suggest that Dimitra++ outperforms existing approaches in generating realistic talking heads with convincing lip motion, facial expression, and head pose.
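To make the architecture more concrete, below is a minimal, hypothetical sketch of what a conditional motion diffusion transformer of this kind might look like: a transformer denoiser over a sequence of 3D facial-motion parameters, conditioned on an audio feature sequence, a reference-image embedding, and the diffusion timestep. All class names, dimensions, and the token-concatenation conditioning scheme are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a conditional motion diffusion transformer in the
# spirit of the cMDT described above. Dimensions, names, and the conditioning
# scheme (prepended condition tokens) are assumptions, not the paper's design.
import torch
import torch.nn as nn


class ConditionalMotionDiffusionTransformer(nn.Module):
    def __init__(self, motion_dim=64, audio_dim=128, ref_dim=512,
                 d_model=256, n_heads=4, n_layers=6):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, d_model)  # noisy 3D motion params -> tokens
        self.audio_proj = nn.Linear(audio_dim, d_model)    # audio features -> condition tokens
        self.ref_proj = nn.Linear(ref_dim, d_model)        # reference-image embedding -> one token
        self.time_embed = nn.Sequential(                   # diffusion timestep embedding
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, motion_dim)          # predicted denoised motion (or noise)

    def forward(self, noisy_motion, t, audio_feats, ref_embed):
        # noisy_motion: (B, T, motion_dim), audio_feats: (B, T, audio_dim),
        # ref_embed: (B, ref_dim), t: (B,) diffusion timesteps in [0, 1].
        motion_tok = self.motion_proj(noisy_motion)
        audio_tok = self.audio_proj(audio_feats)
        ref_tok = self.ref_proj(ref_embed).unsqueeze(1)
        t_tok = self.time_embed(t.view(-1, 1)).unsqueeze(1)
        # Prepend condition tokens (reference, timestep, audio) to the motion tokens.
        x = torch.cat([ref_tok, t_tok, audio_tok, motion_tok], dim=1)
        x = self.backbone(x)
        # Keep only the positions corresponding to the motion tokens.
        return self.out(x[:, -noisy_motion.shape[1]:])


# Toy usage: denoise a 100-frame motion sequence for a batch of 2.
model = ConditionalMotionDiffusionTransformer()
noisy = torch.randn(2, 100, 64)
audio = torch.randn(2, 100, 128)
ref = torch.randn(2, 512)
t = torch.rand(2)
pred = model(noisy, t, audio, ref)  # (2, 100, 64)
```

In a full diffusion pipeline, a denoiser like this would be called iteratively across timesteps; the per-frame audio tokens keep the generated motion synchronized with speech, while the single reference token injects the appearance/identity condition. Again, this is only a sketch under stated assumptions, not the published method.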
Similar Papers
IMTalker: Efficient Audio-driven Talking Face Generation with Implicit Motion Transfer
CV and Pattern Recognition
Makes faces talk realistically from pictures.
Playmate2: Training-Free Multi-Character Audio-Driven Animation via Diffusion Transformer with Reward Feedback
CV and Pattern Recognition
Makes videos of people talking from sound.
StreamingTalker: Audio-driven 3D Facial Animation with Autoregressive Diffusion Model
CV and Pattern Recognition
Makes computer faces talk in real-time.