VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis

Published: July 8, 2025 | arXiv ID: 2507.06060v2

By: Alexandre Symeonidis-Herzig, Özge Mercanoğlu Sincan, Richard Bowden

Potential Business Impact:

Enables 3D avatars whose faces speak with realistic, accurately synchronized lip movements, supporting virtual assistants, media production, and sign language accessibility.

Business Areas:
Virtual World Community and Lifestyle, Media and Entertainment, Software

Realistic, high-fidelity 3D facial animations are crucial for expressive avatar systems in human-computer interaction and accessibility. Although prior methods show promising quality, their reliance on the mesh domain limits their ability to fully leverage the rapid visual innovations seen in 2D computer vision and graphics. We propose VisualSpeaker, a novel method that bridges this gap using photorealistic differentiable rendering, supervised by visual speech recognition, for improved 3D facial animation. Our contribution is a perceptual lip-reading loss, derived by passing photorealistic 3D Gaussian Splatting avatar renders through a pre-trained Visual Automatic Speech Recognition model during training. Evaluation on the MEAD dataset demonstrates that VisualSpeaker improves both the standard Lip Vertex Error metric by 56.1% and the perceptual quality of the generated animations, while retaining the controllability of mesh-driven animation. This perceptual focus naturally supports accurate mouthings, essential cues that disambiguate similar manual signs in sign language avatars.
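To make the training signal concrete, here is a minimal PyTorch sketch of a perceptual lip-reading loss in the spirit of the one described above: rendered avatar frames and ground-truth video both pass through a frozen visual speech recognition encoder, and the distance between the resulting features supervises the animation. The `ToyVASREncoder`, tensor shapes, and L1 feature distance are illustrative assumptions, not the authors' released implementation; the paper uses a pre-trained Visual Automatic Speech Recognition model and photorealistic 3D Gaussian Splatting renders.

```python
# Sketch of a perceptual lip-reading loss: compare frozen visual-ASR
# features of rendered avatar video against those of real video.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyVASREncoder(nn.Module):
    """Hypothetical stand-in for a pre-trained visual ASR encoder."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Conv3d(3, feat_dim, kernel_size=3, padding=1)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, channels, time, height, width)
        x = F.relu(self.conv(video))
        # Pool spatially but keep the temporal axis: lip reading is
        # fundamentally about motion over time.
        return x.mean(dim=(3, 4))  # (batch, feat_dim, time)


def lip_reading_loss(rendered: torch.Tensor,
                     ground_truth: torch.Tensor,
                     vasr: nn.Module) -> torch.Tensor:
    """Feature distance between rendered and real talking-face video."""
    with torch.no_grad():                 # the VASR model stays frozen
        target = vasr(ground_truth)
    pred = vasr(rendered)                 # gradients flow into the render
    return F.l1_loss(pred, target)


if __name__ == "__main__":
    vasr = ToyVASREncoder().eval()
    for p in vasr.parameters():
        p.requires_grad_(False)
    # In the paper, rendered frames come from a differentiable 3D Gaussian
    # Splatting avatar; random tensors are used here for illustration only.
    rendered = torch.rand(2, 3, 16, 96, 96, requires_grad=True)
    real = torch.rand(2, 3, 16, 96, 96)
    loss = lip_reading_loss(rendered, real, vasr)
    loss.backward()                       # gradients reach the rendered input
    print(loss.item())
```

Because the renderer is differentiable, gradients from this feature distance flow back through the rendered pixels into the underlying animation parameters, which is what allows a 2D lip-reading model to supervise 3D facial motion.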

Country of Origin
🇬🇧 United Kingdom

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition