Pix2NPHM: Learning to Regress NPHM Reconstructions From a Single Image
By: Simon Giebenhain, Tobias Kirschstein, Liam Schoneveld, and more
Neural Parametric Head Models (NPHMs) are a recent advancement over mesh-based 3D morphable models (3DMMs), offering high-fidelity geometric detail. However, fitting NPHMs to visual inputs is notoriously challenging due to the expressiveness of their underlying latent space. To this end, we propose Pix2NPHM, a vision transformer (ViT) network that directly regresses NPHM parameters from a single input image. Compared to existing approaches, the neural parametric space allows our method to reconstruct more recognizable facial geometry and more accurate facial expressions. For broad generalization, we exploit domain-specific ViT backbones that are pretrained on geometric prediction tasks. We train Pix2NPHM on a mixture of 3D data, including over 100K NPHM registrations that enable direct supervision in SDF space, and large-scale 2D video datasets, for which normal estimates serve as pseudo-ground-truth geometry. Pix2NPHM not only enables 3D reconstruction at interactive frame rates; geometric fidelity can be further improved by a subsequent inference-time optimization against estimated surface normals and canonical point maps. As a result, we achieve unprecedented face reconstruction quality that can run at scale on in-the-wild data.
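The inference-time refinement step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the decoder below is a hypothetical linear map standing in for the NPHM SDF decoder plus differentiable normal rendering, and all dimensions, names, and hyperparameters are illustrative. It shows only the general pattern of refining a regressed latent code by gradient descent on a loss against estimated surface normals.

```python
import numpy as np

rng = np.random.default_rng(0)
D_LATENT, D_NORMALS = 8, 32                  # hypothetical dimensions
W = rng.normal(size=(D_NORMALS, D_LATENT))   # toy stand-in decoder weights


def decode_normals(z):
    """Toy stand-in for rendering surface normals from an NPHM latent code."""
    return W @ z


def refine(z_init, target_normals, lr=1e-2, steps=200):
    """Gradient descent on an L2 normal loss, mimicking the paper's
    inference-time optimization against estimated surface normals."""
    z = z_init.copy()
    for _ in range(steps):
        residual = decode_normals(z) - target_normals
        grad = W.T @ residual                # gradient of 0.5 * ||W z - n||^2
        z -= lr * grad
    return z


z_regressed = rng.normal(size=D_LATENT)              # pretend ViT regression output
target = decode_normals(rng.normal(size=D_LATENT))   # synthetic "observed" normals

loss_before = 0.5 * np.sum((decode_normals(z_regressed) - target) ** 2)
z_refined = refine(z_regressed, target)
loss_after = 0.5 * np.sum((decode_normals(z_refined) - target) ** 2)
```

In the actual method, the residual would come from a differentiable renderer comparing rendered normals (and canonical point maps) against monocular estimates, but the refinement loop follows this same shape.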
Similar Papers
Parametric Gaussian Human Model: Generalizable Prior for Efficient and Realistic Human Avatar Modeling
CV and Pattern Recognition
Creates realistic people for games from one video.
ImHead: A Large-scale Implicit Morphable Model for Localized Head Modeling
CV and Pattern Recognition
Makes 3D faces look real and change expressions easily.
On the Use of Hierarchical Vision Foundation Models for Low-Cost Human Mesh Recovery and Pose Estimation
CV and Pattern Recognition
Makes computer models of people smaller, faster.