Text2Lip: Progressive Lip-Synced Talking Face Generation from Text via Viseme-Guided Rendering
By: Xu Wang, Shengeng Tang, Fei Wang, and more
Potential Business Impact:
Makes any text speak with a realistic face.
Generating semantically coherent and visually accurate talking faces requires bridging the gap between linguistic meaning and facial articulation. Although audio-driven methods remain prevalent, their reliance on high-quality paired audio-visual data and the inherent ambiguity in mapping acoustics to lip motion pose significant challenges to scalability and robustness. To address these issues, we propose Text2Lip, a viseme-centric framework that constructs an interpretable phonetic-visual bridge by embedding textual input into structured viseme sequences. These mid-level units serve as a linguistically grounded prior for lip motion prediction. Furthermore, we design a progressive viseme-audio replacement strategy based on curriculum learning, enabling the model to gradually transition from real audio to pseudo-audio reconstructed from enhanced viseme features via cross-modal attention. This allows for robust generation in both audio-present and audio-free scenarios. Finally, a landmark-guided renderer synthesizes photorealistic facial videos with accurate lip synchronization. Extensive evaluations show that Text2Lip outperforms existing approaches in semantic fidelity, visual realism, and modality robustness, establishing a new paradigm for controllable and flexible talking face generation. Our project homepage is https://plyon1.github.io/Text2Lip/.
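The progressive viseme-audio replacement described in the abstract can be pictured as a curriculum schedule: early in training the model sees real audio features, and with a probability that grows over training, whole clips are swapped for pseudo-audio reconstructed from viseme features via cross-modal attention. The following is a minimal PyTorch sketch of that idea; the class name, the linear schedule, the per-clip Bernoulli swapping, and the use of audio positions as attention queries are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ProgressiveVisemeAudioMixer(nn.Module):
    """Sketch of a curriculum-style viseme-audio replacement.

    With probability p(t), which grows over training, real audio
    features are replaced by pseudo-audio reconstructed from viseme
    embeddings via cross-modal attention. Names and the schedule are
    assumptions for illustration, not the paper's exact design.
    """

    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        # Audio positions query the viseme sequence to reconstruct a
        # pseudo-audio feature sequence of the same length.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def replace_prob(self, step: int, total_steps: int) -> float:
        # Assumed linear curriculum: probability ramps from 0 to 1.
        return min(1.0, step / max(1, total_steps))

    def forward(self, audio_feats, viseme_feats, step, total_steps):
        # audio_feats:  (B, T_a, D) real audio features
        # viseme_feats: (B, T_v, D) enhanced viseme features
        pseudo_audio, _ = self.cross_attn(
            query=audio_feats, key=viseme_feats, value=viseme_feats)
        p = self.replace_prob(step, total_steps)
        # Per-sample Bernoulli mask: swap entire clips to pseudo-audio.
        mask = (torch.rand(audio_feats.size(0), 1, 1,
                           device=audio_feats.device) < p).float()
        return mask * pseudo_audio + (1.0 - mask) * audio_feats
```

At audio-free inference, the replacement probability would effectively be fixed at 1, and the attention queries could not come from real audio; the paper presumably derives them from the viseme stream itself (e.g., learned positional queries), a detail not specified in the abstract.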
Similar Papers
Face2VoiceSync: Lightweight Face-Voice Consistency for Text-Driven Talking Face Generation
Sound
Makes faces talk with any voice.
Shared Latent Representation for Joint Text-to-Audio-Visual Synthesis
CV and Pattern Recognition
Makes talking robots look and sound real.
VSpeechLM: A Visual Speech Language Model for Visual Text-to-Speech Task
Multimedia
Makes videos talk with matching lip movements.