VASA-3D: Lifelike Audio-Driven Gaussian Head Avatars from a Single Image
By: Sicheng Xu, Guojun Chen, Jiaolong Yang, and more
Potential Business Impact:
Makes 3D talking heads from one picture.
We propose VASA-3D, an audio-driven, single-shot 3D head avatar generator. This research tackles two major challenges: capturing the subtle expression details present in real human faces, and reconstructing an intricate 3D head avatar from a single portrait image. To accurately model expression details, VASA-3D leverages the motion latent of VASA-1, a method that yields exceptional realism and vividness in 2D talking heads. A critical element of our work is translating this motion latent to 3D, which is accomplished by devising a 3D head model conditioned on the motion latent. The model is customized to a single image through an optimization framework that employs numerous video frames of the reference head synthesized from the input image. The optimization uses training losses that are robust to artifacts and to the limited pose coverage of the generated training data. Our experiments show that VASA-3D produces realistic 3D talking heads that prior methods cannot achieve, and it supports online generation of 512×512 free-viewpoint videos at up to 75 FPS, facilitating more immersive engagement with lifelike 3D avatars.
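To make the described pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the personalization stage: synthesized frames of the reference head (produced by VASA-1 in the paper) supervise a 3D head model conditioned on each frame's motion latent. Everything below, including the module names, the toy splatting renderer, and the plain L1 loss, is an illustrative stand-in under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MotionConditionedHead3D(nn.Module):
    """Toy stand-in for a motion-latent-conditioned 3D Gaussian head model."""
    def __init__(self, num_gaussians=1024, latent_dim=256):
        super().__init__()
        self.base_positions = nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)
        self.colors = nn.Parameter(torch.rand(num_gaussians, 3))
        # Small MLP mapping the motion latent to per-Gaussian position offsets.
        self.deform = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_gaussians * 3),
        )

    def forward(self, motion_latent):
        offsets = self.deform(motion_latent).view(-1, 3)
        return self.base_positions + offsets, self.colors

def render_gaussians(positions, colors, image_size=64):
    """Crude differentiable stand-in for a 3D Gaussian splatting rasterizer:
    each point contributes a Gaussian footprint to an orthographic image."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, image_size),
        torch.linspace(-1.0, 1.0, image_size),
        indexing="ij",
    )
    pixels = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (P, 2)
    d2 = torch.cdist(pixels, positions[:, :2]) ** 2         # (P, N)
    weights = torch.exp(-d2 / 0.01)
    image = weights @ colors / (weights.sum(-1, keepdim=True) + 1e-6)
    return image.reshape(image_size, image_size, 3)

model = MotionConditionedHead3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# In the paper, these would be frames of the reference head synthesized by
# VASA-1 from the single input image, paired with the motion latent that
# produced each frame; random tensors stand in for them here.
frames = torch.rand(8, 64, 64, 3)
motion_latents = torch.randn(8, 256)

for step in range(200):
    i = step % len(frames)
    positions, colors = model(motion_latents[i])
    rendered = render_gaussians(positions, colors)
    # The paper describes losses robust to artifacts and limited pose
    # coverage in the generated data; a plain L1 loss is shown for brevity.
    loss = (rendered - frames[i]).abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```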
Similar Papers
ViSA: 3D-Aware Video Shading for Real-Time Upper-Body Avatar Creation
CV and Pattern Recognition
Creates realistic 3D people from one picture.
GaussianHeadTalk: Wobble-Free 3D Talking Heads with Audio Driven Gaussian Splatting
CV and Pattern Recognition
Creates real-time talking avatars from sound.