HairFormer: Transformer-Based Dynamic Neural Hair Simulation
By: Joy Xiaoji Zhang, Jingsen Zhu, Hanyu Chen, and more
Potential Business Impact:
Makes computer hair move like real hair.
Simulating hair dynamics that generalize across arbitrary hairstyles, body shapes, and motions is a critical challenge. Our two-stage neural solution is the first to leverage Transformer-based architectures for such broad generalization. A Transformer-powered static network predicts the static draped shape of any hairstyle, resolving hair-body penetrations while preserving hair fidelity. A dynamic network with a novel cross-attention mechanism then fuses static hair features with kinematic input to generate expressive dynamics and complex secondary motions; this network also allows efficient fine-tuning on challenging motion sequences, such as abrupt head movements. The method runs in real time for both static single-frame drapes and dynamic drapes over pose sequences. Guided by physics-informed losses, it produces high-fidelity, generalizable dynamics across diverse styles and resolves penetrations even for complex, unseen long hairstyles.
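The abstract describes a dynamic stage that uses cross-attention to fuse static hair features with kinematic input. The PyTorch sketch below is a minimal illustration of that kind of fusion, assuming per-strand hair tokens attend over a sequence of pose-derived kinematic tokens; the module name, feature dimensions, and feed-forward design are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class KinematicCrossAttention(nn.Module):
    """Hypothetical cross-attention block: static hair features (queries)
    attend over kinematic features of the pose sequence (keys/values)."""

    def __init__(self, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(feat_dim)
        self.norm2 = nn.LayerNorm(feat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim * 2),
            nn.GELU(),
            nn.Linear(feat_dim * 2, feat_dim),
        )

    def forward(self, hair_feats: torch.Tensor, kin_feats: torch.Tensor) -> torch.Tensor:
        # hair_feats: (batch, num_strand_tokens, feat_dim), e.g. from a static network
        # kin_feats:  (batch, num_frames, feat_dim), encoding head/body motion
        fused, _ = self.attn(query=hair_feats, key=kin_feats, value=kin_feats)
        x = self.norm1(hair_feats + fused)   # residual connection + norm
        return self.norm2(x + self.mlp(x))   # feed-forward refinement

# Toy usage: 2 samples, 64 strand tokens, 16 kinematic frames, 128-dim features
hair = torch.randn(2, 64, 128)
kin = torch.randn(2, 16, 128)
out = KinematicCrossAttention()(hair, kin)
print(out.shape)  # torch.Size([2, 64, 128])

In this sketch the hair tokens act as queries so every strand feature can gather motion context from the whole pose sequence; the actual HairFormer networks, losses, and token definitions are described only at a high level in the abstract above.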
Similar Papers
Neuralocks: Real-Time Dynamic Neural Hair Simulation
Graphics
Makes virtual hair move realistically in games.
ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering
Graphics
Makes computer-generated hair move realistically.
DGH: Dynamic Gaussian Hair
CV and Pattern Recognition
Makes computer hair move and look real.