VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis
By: Alexandre Symeonidis-Herzig, Özge Mercanoğlu Sincan, Richard Bowden
Potential Business Impact:
Makes computer faces talk and move realistically.
Realistic, high-fidelity 3D facial animations are crucial for expressive avatar systems in human-computer interaction and accessibility. Although prior methods show promising quality, their reliance on the mesh domain limits their ability to fully leverage the rapid visual innovations seen in 2D computer vision and graphics. We propose VisualSpeaker, a novel method that bridges this gap through photorealistic differentiable rendering, supervised by visual speech recognition, for improved 3D facial animation. Our contribution is a perceptual lip-reading loss, derived by passing photorealistic 3D Gaussian Splatting avatar renders through a pre-trained Visual Automatic Speech Recognition model during training. Evaluation on the MEAD dataset shows that VisualSpeaker improves the standard Lip Vertex Error metric by 56.1% while also improving the perceptual quality of the generated animations and retaining the controllability of mesh-driven animation. This perceptual focus naturally supports accurate mouthings, essential cues that disambiguate similar manual signs in sign language avatars.
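To make the training signal concrete, below is a minimal sketch of how a perceptual lip-reading loss of this kind could be wired up in PyTorch. The names `renderer` (a differentiable 3D Gaussian Splatting renderer mapping predicted facial parameters to frames) and `vasr_model` (a frozen, pre-trained visual ASR feature extractor) are placeholder interfaces assumed for illustration, not the paper's actual API, and the feature-space L1 distance is one plausible choice of comparison.

```python
import torch
import torch.nn.functional as F

def perceptual_lip_reading_loss(pred_params, gt_frames, renderer, vasr_model):
    """Sketch of a perceptual lip-reading loss (illustrative, not the paper's code).

    renderer:   assumed differentiable 3D Gaussian Splatting renderer,
                pred_params -> video frames of shape (T, 3, H, W)
    vasr_model: assumed frozen, pre-trained visual ASR feature extractor,
                frames -> per-frame features of shape (T, D)
    """
    # Render the animated avatar; gradients flow back to the facial parameters.
    pred_frames = renderer(pred_params)            # (T, 3, H, W)

    # Features of the ground-truth video need no gradients.
    with torch.no_grad():
        gt_feats = vasr_model(gt_frames)           # (T, D)

    # Features of the rendered video keep gradients through the renderer.
    pred_feats = vasr_model(pred_frames)           # (T, D)

    # Penalise mismatch in the visual-speech feature space.
    return F.l1_loss(pred_feats, gt_feats)
```

In practice this term would be combined with standard geometric losses (e.g. vertex-level error), so the perceptual signal sharpens lip articulation without sacrificing mesh controllability.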
Similar Papers
Supervising 3D Talking Head Avatars with Analysis-by-Audio-Synthesis
Graphics
Makes computer faces talk and show feelings.
Perceptually Accurate 3D Talking Head Generation: New Definitions, Speech-Mesh Representation, and Evaluation Metrics
Graphics
Makes talking avatars' mouths move correctly with speech.
Audio Driven Real-Time Facial Animation for Social Telepresence
Graphics
Makes virtual faces talk and move like real people.