HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting
By: Yimin Pan, Matthias Nießner, Tobias Kirschstein
Potential Business Impact:
Makes computer hair look real from photos.
Human hair reconstruction is a challenging problem in computer vision, with growing importance for applications in virtual reality and digital human modeling. Recent advances in 3D Gaussian Splatting (3DGS) provide an efficient, explicit scene representation that naturally aligns with the structure of hair strands. In this work, we extend the 3DGS framework to enable strand-level hair geometry reconstruction from multi-view images. Our multi-stage pipeline first reconstructs detailed hair geometry using a differentiable Gaussian rasterizer, then merges individual Gaussian segments into coherent strands through a novel merging scheme, and finally refines and grows the strands under photometric supervision. While existing methods typically evaluate reconstruction quality at the geometric level, they often neglect the connectivity and topology of hair strands. To address this, we propose a new evaluation metric that serves as a proxy for assessing topological accuracy in strand reconstruction. Extensive experiments on both synthetic and real-world datasets demonstrate that our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour. The project page can be found at: https://yimin-pan.github.io/hair-gs/
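To make the segment-merging step above concrete, here is a minimal sketch (not the authors' implementation) of the underlying idea: treat each elongated Gaussian as a short 3D segment, and chain two segments into one strand polyline when their endpoints nearly coincide and their directions roughly agree. The function names, greedy chaining strategy, and threshold values are hypothetical illustrations, not details from the paper.

```python
import numpy as np

def can_merge(seg_a, seg_b, dist_thresh=0.005, angle_thresh_deg=30.0):
    """Decide whether two segments (each a pair of 3D endpoints) should be
    chained: the end of seg_a must lie close to the start of seg_b, and
    their tangent directions must be roughly aligned. Thresholds are
    illustrative placeholders, not values from the paper."""
    a0, a1 = np.asarray(seg_a, dtype=float)
    b0, b1 = np.asarray(seg_b, dtype=float)
    # Endpoint proximity check
    if np.linalg.norm(a1 - b0) > dist_thresh:
        return False
    # Direction alignment via the angle between unit tangents
    da = (a1 - a0) / np.linalg.norm(a1 - a0)
    db = (b1 - b0) / np.linalg.norm(b1 - b0)
    return float(np.dot(da, db)) >= np.cos(np.deg2rad(angle_thresh_deg))

def merge_segments(segments, **kw):
    """Greedily chain segments into strand polylines: append each segment
    to the first strand whose tail it can extend, else start a new strand."""
    strands = []
    for seg in segments:
        for strand in strands:
            if can_merge(strand[-1], seg, **kw):
                strand.append(seg)
                break
        else:
            strands.append([seg])
    return strands
```

The paper's actual merging scheme operates on optimized Gaussians inside a multi-stage pipeline with photometric refinement; this greedy proximity-and-direction test only conveys why connectivity (which segment joins which) matters beyond per-segment geometric accuracy.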
Similar Papers
GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
CV and Pattern Recognition
Makes digital hair look real from 3D scans.
Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars
CV and Pattern Recognition
Makes 3D hair models from one picture.
DGH: Dynamic Gaussian Hair
CV and Pattern Recognition
Makes computer hair move and look real.