See the Speaker: Crafting High-Resolution Talking Faces from Speech with Prior Guidance and Region Refinement
By: Jinting Wang, Jun Wang, Hei Victor Cheng, and more
Potential Business Impact:
Makes talking faces from just sound.
Unlike existing methods that rely on source images as appearance references and use source speech only to generate motion, this work proposes a novel approach that extracts all necessary information directly from the speech, addressing key challenges in speech-to-talking-face generation. Specifically, we first employ a speech-to-face portrait generation stage, utilizing a speech-conditioned diffusion model combined with a statistical facial prior and a sample-adaptive weighting module to achieve high-quality portrait generation. In the subsequent speech-driven talking face generation stage, we embed expressive dynamics such as lip movement, facial expressions, and eye movements into the latent space of the diffusion model and further optimize lip synchronization using a region-enhancement module. To generate high-resolution outputs, we integrate a pre-trained Transformer-based discrete codebook with an image rendering network, enhancing video frame details in an end-to-end manner. Experimental results demonstrate that our method outperforms existing approaches on the HDTF, VoxCeleb, and AVSpeech datasets. Notably, this is the first method capable of generating high-resolution, high-quality talking face videos exclusively from a single speech input.
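To make the two mechanisms named in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of (a) a sample-adaptive weighting module that blends a diffusion portrait estimate with a statistical facial prior, and (b) a region-enhancement loss that upweights the mouth area for lip synchronization. The module names, tensor shapes, gating scheme, and the weight `alpha` are all illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of two components described in the abstract.
# All names, shapes, and the weighting scheme are assumptions for illustration.
import torch
import torch.nn as nn

class SampleAdaptiveWeighting(nn.Module):
    """Blends the diffusion model's portrait estimate with a statistical
    facial prior, using a per-sample weight predicted from the speech
    embedding (assumed mechanism)."""
    def __init__(self, speech_dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(speech_dim, 1), nn.Sigmoid())

    def forward(self, portrait_pred, facial_prior, speech_emb):
        w = self.gate(speech_emb).view(-1, 1, 1, 1)  # per-sample scalar in (0, 1)
        return w * portrait_pred + (1 - w) * facial_prior

def region_enhanced_lip_loss(pred_frame, target_frame, mouth_mask, alpha=5.0):
    """Region-enhancement idea: upweight reconstruction error inside the
    mouth region to sharpen lip sync (alpha is a guessed hyperparameter)."""
    per_pixel = (pred_frame - target_frame).abs()
    weight = 1.0 + alpha * mouth_mask  # mask is 1 inside the mouth region
    return (weight * per_pixel).mean()

# Toy usage with random tensors: batch of 2, 64x64 RGB portraits.
if __name__ == "__main__":
    B, C, H, W = 2, 3, 64, 64
    speech_emb = torch.randn(B, 256)
    portrait_pred = torch.randn(B, C, H, W)  # stage-1 diffusion output (stub)
    facial_prior = torch.randn(B, C, H, W)   # statistical facial prior (stub)

    fuse = SampleAdaptiveWeighting()
    portrait = fuse(portrait_pred, facial_prior, speech_emb)

    mouth_mask = torch.zeros(B, 1, H, W)
    mouth_mask[:, :, 40:56, 20:44] = 1.0     # crude mouth bounding box
    loss = region_enhanced_lip_loss(portrait, torch.randn(B, C, H, W), mouth_mask)
    print(portrait.shape, loss.item())
```

The blend-then-penalize structure mirrors the abstract's ordering: the prior stabilizes portrait appearance when speech evidence is weak, while the masked loss concentrates training signal where lip synchronization is judged.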
Similar Papers
FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis
CV and Pattern Recognition
Makes still pictures talk and move like real people.
Mask-Free Audio-driven Talking Face Generation for Enhanced Visual Quality and Identity Preservation
CV and Pattern Recognition
Makes faces talk realistically from sound.
IMTalker: Efficient Audio-driven Talking Face Generation with Implicit Motion Transfer
CV and Pattern Recognition
Makes faces talk realistically from pictures.