TeRA: Rethinking Text-guided Realistic 3D Avatar Generation
By: Yanwen Wang, Yiyu Zhuang, Jiawei Zhang, and more
Potential Business Impact:
Creates realistic 3D people from text descriptions.
In this paper, we rethink text-to-avatar generative models by proposing TeRA, a framework that is more efficient and effective than previous SDS-based models and general large 3D generative models. Our approach employs a two-stage training strategy to learn a native 3D avatar generative model. First, we distill a decoder from a large human reconstruction model to obtain a structured latent space. Then, a text-conditioned latent diffusion model is trained to generate photorealistic 3D human avatars within this latent space. TeRA improves performance by eliminating slow iterative optimization, and its structured 3D human representation enables text-based partial customization. Experiments demonstrate our approach's superiority over previous text-to-avatar generative models in both subjective and objective evaluations.
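The two-stage strategy above can be sketched in miniature. The following toy Python example is not the authors' code; `teacher_encode`, `student_decode`, and the stage-2 "generator" are illustrative stand-ins (the real stage 2 is a latent diffusion model, approximated here by a trivial text-to-latent lookup) meant only to show the pipeline shape: distill a decoder against a frozen reconstruction model, then train a text-conditioned generator in the resulting latent space.

```python
# Toy sketch of a two-stage latent-generative pipeline (hypothetical, not TeRA's code).
import random

random.seed(0)

def teacher_encode(avatar):
    """Frozen 'large human reconstruction model': avatar -> structured latent.
    Here it just halves every coordinate."""
    return [x * 0.5 for x in avatar]

def student_decode(latent, weight):
    """Student decoder being distilled; a single scalar weight for illustration."""
    return [weight * z for z in latent]

def stage1_distill(avatars, steps=500, lr=0.05):
    """Stage 1: fit the decoder so decode(teacher_encode(a)) reconstructs a."""
    w = 0.0
    for _ in range(steps):
        a = random.choice(avatars)
        z = teacher_encode(a)
        recon = student_decode(z, w)
        # gradient of mean squared reconstruction error w.r.t. w
        grad = sum(2 * (r - t) * zi for r, t, zi in zip(recon, a, z)) / len(a)
        w -= lr * grad
    return w  # converges toward 2.0, inverting the 0.5 teacher scaling

def stage2_generator(text_avatar_pairs):
    """Stage 2 stand-in: map each text prompt to the mean latent of its
    training avatars (a crude proxy for a text-conditioned latent diffusion model)."""
    table = {}
    for text, avatar in text_avatar_pairs:
        table.setdefault(text, []).append(teacher_encode(avatar))
    return {t: [sum(c) / len(c) for c in zip(*zs)] for t, zs in table.items()}

avatars = [[1.0, 2.0], [3.0, 4.0]]
w = stage1_distill(avatars)
gen = stage2_generator([("tall avatar", avatars[1])])
# Generation at test time: text -> latent -> decoded avatar, no per-sample optimization.
sample = student_decode(gen["tall avatar"], w)
```

Because generation is a single decoder pass over a sampled latent, there is no slow per-avatar iterative optimization, which mirrors the efficiency argument made in the abstract.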
Similar Papers
Dream3DAvatar: Text-Controlled 3D Avatar Reconstruction from a Single Image
CV and Pattern Recognition
Makes 3D characters from one picture.
ViSA: 3D-Aware Video Shading for Real-Time Upper-Body Avatar Creation
CV and Pattern Recognition
Creates realistic 3D people from one picture.