3D Engine-ready Photorealistic Avatars via Dynamic Textures
By: Yifan Wang, Ivan Molodetskikh, Ondrej Texler, and more
Potential Business Impact:
Creates realistic 3D people for games and movies.
As the digital and physical worlds become more intertwined, there has been growing interest in digital avatars that closely resemble their real-world counterparts. Current digitization methods used in 3D production pipelines require costly capture setups, making them impractical for everyday consumers. Recent academic work has had success reconstructing humans from limited data using implicit representations (e.g., neural radiance fields, or NeRFs), which can produce impressive videos. However, these methods are incompatible with traditional rendering pipelines, making them difficult to use in applications such as games. In this work, we propose an end-to-end pipeline that builds explicitly represented photorealistic 3D avatars from standard 3D assets. Our key idea is the use of dynamically generated textures to enhance realism and visually mask deficiencies in the underlying mesh geometry. This allows for seamless integration with current graphics pipelines while achieving visual quality comparable to state-of-the-art 3D avatar generation methods.
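To make the key idea concrete: instead of baking one static texture onto the mesh, a small learned function maps per-frame animation parameters (e.g., blendshape or pose coefficients) to a fresh UV texture, which a standard engine then applies to the fixed mesh. The sketch below is a toy stand-in, not the paper's actual network: a single linear layer plus sigmoid in place of the learned generator, with all names and sizes chosen for illustration.

```python
import numpy as np

def dynamic_texture(pose_params, weights, tex_size=8):
    """Map animation parameters to a per-frame RGB UV texture.

    Toy stand-in for a dynamic-texture generator: one linear layer
    followed by a sigmoid squashing values into valid [0, 1] RGB.
    A real system would use a learned convolutional generator.
    """
    logits = weights @ pose_params            # flat (tex_size*tex_size*3,) vector
    texture = 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> values in [0, 1]
    return texture.reshape(tex_size, tex_size, 3)

# Hypothetical per-frame animation state and (random) generator weights.
rng = np.random.default_rng(0)
pose = rng.normal(size=16)                    # e.g. 16 blendshape coefficients
W = 0.1 * rng.normal(size=(8 * 8 * 3, 16))

tex = dynamic_texture(pose, W)
print(tex.shape)  # one (8, 8, 3) texture regenerated every frame
```

Because the output is an ordinary texture on an ordinary mesh, it can be uploaded to the GPU like any other asset each frame, which is what makes the approach compatible with existing engine pipelines.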
Similar Papers
Text-based Animatable 3D Avatars with Morphable Model Alignment
CV and Pattern Recognition
Creates realistic talking 3D heads from text.
Towards High-fidelity 3D Talking Avatar with Personalized Dynamic Texture
CV and Pattern Recognition
Makes talking cartoon faces look super real.
AvatarTex: High-Fidelity Facial Texture Reconstruction from Single-Image Stylized Avatars
CV and Pattern Recognition
Creates realistic and artistic faces from one picture.