Score: 1

VASA-3D: Lifelike Audio-Driven Gaussian Head Avatars from a Single Image

Published: December 16, 2025 | arXiv ID: 2512.14677v1

By: Sicheng Xu, Guojun Chen, Jiaolong Yang, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Creates lifelike, audio-driven 3D talking-head avatars from a single photo.

Business Areas:
Virtual World Community and Lifestyle, Media and Entertainment, Software

We propose VASA-3D, an audio-driven, single-shot 3D head avatar generator. This research tackles two major challenges: capturing the subtle expression details present in real human faces, and reconstructing an intricate 3D head avatar from a single portrait image. To accurately model expression details, VASA-3D leverages the motion latent of VASA-1, a method that yields exceptional realism and vividness in 2D talking heads. A critical element of our work is translating this motion latent to 3D, which is accomplished by devising a 3D head model conditioned on the motion latent. Customization of this model to a single image is achieved through an optimization framework that uses numerous video frames of the reference head synthesized from the input image. The optimization employs various training losses that are robust to artifacts and limited pose coverage in the generated training data. Our experiments show that VASA-3D produces realistic 3D talking heads that prior art cannot achieve, and it supports online generation of 512x512 free-viewpoint videos at up to 75 FPS, facilitating more immersive engagement with lifelike 3D avatars.
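The abstract describes a single-image customization stage: a motion-latent-conditioned 3D head model is optimized against video frames of the reference head that were synthesized from the input portrait, using losses robust to artifacts in that generated data. The sketch below is a minimal, hypothetical illustration of such an optimization loop, not the authors' implementation; all names (GaussianHeadModel-style `head_model`, `renderer`, `synth_frames`, `motion_latents`, `cameras`) are placeholder assumptions.

```python
# Conceptual sketch (assumed, not the paper's code) of fitting a
# motion-latent-conditioned 3D head avatar to frames synthesized
# from a single portrait image.
import torch
import torch.nn.functional as F

def personalize(head_model, renderer, synth_frames, motion_latents, cameras,
                steps=2000, lr=1e-3):
    """Optimize avatar parameters against generated training frames."""
    opt = torch.optim.Adam(head_model.parameters(), lr=lr)
    for step in range(steps):
        i = step % len(synth_frames)
        target = synth_frames[i]             # (3, H, W) synthesized frame
        z = motion_latents[i]                # motion latent driving this frame
        cam = cameras[i]                     # estimated head/camera pose

        pred = renderer(head_model(z), cam)  # render the latent-conditioned head
        # L1 photometric loss as a stand-in for the paper's robust losses;
        # it is less sensitive to synthesis artifacts than an L2 loss.
        loss = F.l1_loss(pred, target)

        opt.zero_grad()
        loss.backward()
        opt.step()
    return head_model
```

Once personalized, the model would be driven at inference time by motion latents predicted from audio and rendered from any viewpoint, which is what enables the free-viewpoint video generation mentioned above.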

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition