PanoHair: Detailed Hair Strand Synthesis on Volumetric Heads
By: Shashikant Verma, Shanmuganathan Raman
Potential Business Impact:
Creates realistic digital hair much faster.
Achieving realistic hair strand synthesis is essential for creating lifelike digital humans, but producing high-fidelity hair strand geometry remains a significant challenge. Existing methods require a complex data acquisition setup involving multi-view images captured in constrained studio environments. They also suffer from long hair volume estimation and strand synthesis times, which hinder efficiency. We introduce PanoHair, a model that estimates head geometry as signed distance fields using knowledge distillation from a pre-trained generative teacher model for head synthesis. Our approach predicts semantic segmentation masks and 3D orientations specifically for the hair region of the estimated geometry. The method is generative and can synthesize diverse hairstyles through latent-space manipulation. For real images, our approach uses an inversion process to infer latent codes and produces visually appealing hair strands, offering a streamlined alternative to complex multi-view data acquisition setups. Given a latent code, PanoHair generates a clean manifold mesh for the hair region in under 5 seconds, along with semantic and orientation maps, marking a significant improvement over existing methods, as demonstrated in our experiments.
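The core distillation idea in the abstract can be sketched conceptually: a student network regresses the teacher's signed distance values at sampled 3D points. The snippet below is a minimal illustration only, not the paper's implementation; the spherical "teacher" SDF, the noisy stand-in for student predictions, and the L1 objective are all assumptions for demonstration.

```python
import numpy as np

def teacher_sdf(points, radius=1.0):
    # Hypothetical teacher: a sphere's signed distance field stands in
    # for the pre-trained generative head model's geometry output.
    return np.linalg.norm(points, axis=-1) - radius

def distillation_loss(student_vals, teacher_vals):
    # L1 discrepancy between student and teacher signed distances
    # at the sampled query points (an assumed objective, not the paper's).
    return float(np.mean(np.abs(student_vals - teacher_vals)))

rng = np.random.default_rng(0)
pts = rng.uniform(-1.5, 1.5, size=(1024, 3))    # query points in the volume
t_vals = teacher_sdf(pts)
# Stand-in for a partially trained student: teacher values plus small noise.
s_vals = t_vals + rng.normal(0.0, 0.01, size=t_vals.shape)
loss = distillation_loss(s_vals, t_vals)
```

In a real pipeline the student would be a neural network trained to minimize this loss over many sampled point batches, after which a mesh can be extracted from its zero level set (e.g., via marching cubes).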
Similar Papers
Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars
CV and Pattern Recognition
Makes 3D hair models from one picture.
HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting
CV and Pattern Recognition
Makes computer hair look real from photos.
GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
CV and Pattern Recognition
Makes digital hair look real from 3D scans.