Pixel3DMM: Versatile Screen-Space Priors for Single-Image 3D Face Reconstruction
By: Simon Giebenhain, Tobias Kirschstein, Martin Rünz, and more
Potential Business Impact:
Reconstructs a 3D face model from a single 2D photo.
We address the 3D reconstruction of human faces from a single RGB image. To this end, we propose Pixel3DMM, a set of highly generalized vision transformers which predict per-pixel geometric cues in order to constrain the optimization of a 3D morphable face model (3DMM). We exploit the latent features of the DINO foundation model, and introduce tailored surface-normal and uv-coordinate prediction heads. We train our model by registering three high-quality 3D face datasets against the FLAME mesh topology, which results in a total of over 1,000 identities and 976K images. For 3D face reconstruction, we propose a FLAME fitting optimization that solves for the 3DMM parameters from the uv-coordinate and normal estimates. To evaluate our method, we introduce a new benchmark for single-image face reconstruction, which features high diversity in facial expressions, viewing angles, and ethnicities. Crucially, our benchmark is the first to evaluate both posed and neutral facial geometry. Ultimately, our method outperforms the most competitive baselines by over 15% in terms of geometric accuracy for posed facial expressions.
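To make the first stage concrete, here is a minimal PyTorch sketch (not the authors' code) of how per-pixel surface-normal and uv-coordinate prediction heads might sit on top of frozen DINO patch features. The module name, feature dimension, and head architecture are illustrative assumptions; the paper only specifies that tailored heads are attached to DINO latent features.

```python
# Hypothetical sketch: per-pixel normal + uv heads over ViT patch features.
# Sizes (feat_dim=384, patch=16) match DINO ViT-S/16 but are assumptions here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelPriorHeads(nn.Module):
    def __init__(self, feat_dim: int = 384, patch: int = 16):
        super().__init__()
        self.patch = patch
        # Lightweight convolutional heads over upsampled patch features.
        self.normal_head = nn.Sequential(
            nn.Conv2d(feat_dim, 128, 3, padding=1), nn.GELU(),
            nn.Conv2d(128, 3, 1),
        )
        self.uv_head = nn.Sequential(
            nn.Conv2d(feat_dim, 128, 3, padding=1), nn.GELU(),
            nn.Conv2d(128, 2, 1),
        )

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H/patch, W/patch) patch features from the ViT backbone.
        up = F.interpolate(feats, scale_factor=self.patch,
                           mode="bilinear", align_corners=False)
        normals = F.normalize(self.normal_head(up), dim=1)  # unit normals per pixel
        uv = torch.sigmoid(self.uv_head(up))                # uv coords in [0, 1]^2
        return normals, uv
```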
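For the second stage, the sketch below shows one plausible shape a FLAME fitting optimization could take: gradient descent on 3DMM parameters so that rendered screen-space maps match the predicted ones. Here `flame` (a differentiable FLAME layer) and `render_normals_uv` (a differentiable rasterizer producing normal and uv maps) are hypothetical stand-ins, and the parameter dimensions and loss weights are assumptions, not the paper's values.

```python
# Hedged sketch of 3DMM fitting against predicted normal / uv maps.
# `flame` and `render_normals_uv` are hypothetical callables, not defined here.
import torch

def fit_flame(pred_normals, pred_uv, flame, render_normals_uv, steps=500):
    shape = torch.zeros(1, 300, requires_grad=True)  # identity coefficients
    expr = torch.zeros(1, 100, requires_grad=True)   # expression coefficients
    pose = torch.zeros(1, 6, requires_grad=True)     # global rotation + jaw (illustrative)
    opt = torch.optim.Adam([shape, expr, pose], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        verts = flame(shape, expr, pose)             # (1, V, 3) posed mesh vertices
        normals, uv = render_normals_uv(verts)       # rendered screen-space maps
        # Photometric-style L1 terms against the network's per-pixel estimates.
        loss = (normals - pred_normals).abs().mean() + (uv - pred_uv).abs().mean()
        # Quadratic priors keep shape/expression coefficients near the model mean.
        loss = loss + 1e-3 * (shape.square().mean() + expr.square().mean())
        loss.backward()
        opt.step()
    return shape, expr, pose
```

The key design point, following the abstract, is that the network never regresses 3DMM parameters directly; its per-pixel cues only constrain this optimization.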
Similar Papers
Common3D: Self-Supervised Learning of 3D Morphable Models for Common Objects in Neural Feature Space
CV and Pattern Recognition
Teaches computers to see objects in 3D from videos.
Hierarchical MLANet: Multi-level Attention for 3D Face Reconstruction From Single Images
CV and Pattern Recognition
Makes 3D faces from regular photos.
Leveraging 2D Masked Reconstruction for Domain Adaptation of 3D Pose Estimation
CV and Pattern Recognition
Teaches computers to guess body poses from any picture.