HairFormer: Transformer-Based Dynamic Neural Hair Simulation
By: Joy Xiaoji Zhang, Jingsen Zhu, Hanyu Chen, and more
Potential Business Impact:
Makes computer hair move like real hair.
Simulating hair dynamics that generalize across arbitrary hairstyles, body shapes, and motions is a critical challenge. Our novel two-stage neural solution is the first to leverage Transformer-based architectures for such broad generalization. We propose a Transformer-powered static network that predicts static draped shapes for any hairstyle, effectively resolving hair-body penetrations while preserving hair fidelity. A dynamic network with a novel cross-attention mechanism then fuses these static hair features with kinematic input to generate expressive dynamics and complex secondary motion. The dynamic network also supports efficient fine-tuning on challenging motion sequences, such as abrupt head movements. Our method offers real-time inference for both static single-frame drapes and dynamic drapes over pose sequences. Guided by physics-informed losses, it produces high-fidelity, generalizable dynamic hair across diverse styles and resolves penetrations even for complex, unseen long hairstyles.
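To make the two-stage design concrete, below is a minimal sketch of how a Transformer-based static drape network and a cross-attention dynamic network could be wired together, assuming a PyTorch implementation. The module names, feature dimensions, and kinematic encoding are illustrative assumptions for exposition, not the authors' released code.

```python
# Minimal sketch of the two-stage design described in the abstract, assuming a
# PyTorch implementation; module names, feature sizes, and the conditioning
# scheme are illustrative guesses, not the paper's actual architecture.
import torch
import torch.nn as nn


class StaticDrapeNet(nn.Module):
    """Transformer encoder mapping rest-state strand tokens plus a body-shape
    code to a static draped hair shape (one token per strand point)."""

    def __init__(self, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.in_proj = nn.Linear(3 + 16, d_model)   # xyz + body-shape code (assumed sizes)
        self.out_proj = nn.Linear(d_model, 3)       # per-token draped position offset

    def forward(self, strand_pts, body_code):
        # strand_pts: (B, N, 3); body_code: (B, 16) broadcast to every token
        cond = body_code[:, None, :].expand(-1, strand_pts.shape[1], -1)
        tokens = self.in_proj(torch.cat([strand_pts, cond], dim=-1))
        feats = self.encoder(tokens)                # static hair features
        return strand_pts + self.out_proj(feats), feats


class DynamicDrapeNet(nn.Module):
    """Cross-attention block: kinematic (head/body motion) tokens query the
    static hair features to produce per-frame dynamic offsets."""

    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.kin_proj = nn.Linear(6, d_model)       # e.g. linear + angular velocity (assumed)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out_proj = nn.Linear(d_model, 3)

    def forward(self, static_feats, kinematics):
        # static_feats: (B, N, d_model) from StaticDrapeNet
        # kinematics:   (B, N, 6) kinematic signal tiled per strand token
        q = self.kin_proj(kinematics)
        fused, _ = self.cross_attn(q, static_feats, static_feats)
        return self.out_proj(fused)                 # dynamic offset per token


if __name__ == "__main__":
    B, N = 2, 128
    static_net, dyn_net = StaticDrapeNet(), DynamicDrapeNet()
    draped, feats = static_net(torch.randn(B, N, 3), torch.randn(B, 16))
    offsets = dyn_net(feats, torch.randn(B, N, 6))
    print(draped.shape, offsets.shape)              # both (2, 128, 3)
```

In this reading, the static stage resolves the pose-independent drape once, and the dynamic stage only has to attend to those cached features per frame, which is consistent with the real-time inference claim; physics-informed losses would be applied to the summed static and dynamic predictions during training.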
Similar Papers
ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering
Graphics
Makes computer-generated hair move realistically.
DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation
Robotics
Robots can now style any hair, even unseen styles.
MUT3R: Motion-aware Updating Transformer for Dynamic 3D Reconstruction
CV and Pattern Recognition
Fixes wobbly 3D pictures from moving cameras.