HairFormer: Transformer-Based Dynamic Neural Hair Simulation

Published: July 16, 2025 | arXiv ID: 2507.12600v1

By: Joy Xiaoji Zhang, Jingsen Zhu, Hanyu Chen, and more

Potential Business Impact:

Makes computer-generated hair move like real hair in real time, for any hairstyle or body shape.

Simulating hair dynamics that generalize across arbitrary hairstyles, body shapes, and motions is a critical challenge. Our novel two-stage neural solution is the first to leverage Transformer-based architectures for such broad generalization. A Transformer-powered static network predicts static draped shapes for any hairstyle, resolving hair-body penetrations while preserving hair fidelity. A dynamic network with a novel cross-attention mechanism then fuses static hair features with kinematic input to generate expressive dynamics and complex secondary motions; it also supports efficient fine-tuning on challenging motion sequences, such as abrupt head movements. The method runs in real time for both static single-frame drapes and dynamic drapes over pose sequences. Guided by physics-informed losses, it produces high-fidelity, generalizable dynamic hair across diverse styles and resolves penetrations even for complex, unseen long hairstyles.
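To make the fusion step concrete, below is a minimal, hypothetical sketch of a cross-attention module that lets static hair features attend to a kinematic (pose/motion) sequence, in the spirit of the dynamic network the abstract describes. This is not the authors' code: the module name, tensor shapes, dimensions (feat_dim, kin_dim), and the residual-displacement output are illustrative assumptions.

```python
# Hypothetical sketch (not the HairFormer implementation): cross-attention
# that fuses static hair features with kinematic input, as described above.
import torch
import torch.nn as nn

class DynamicHairCrossAttention(nn.Module):
    def __init__(self, feat_dim=128, kin_dim=64, num_heads=4):
        super().__init__()
        # Project per-frame kinematic input (e.g. head/body pose, velocity)
        # into the same embedding space as the static hair features.
        self.kin_proj = nn.Linear(kin_dim, feat_dim)
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(feat_dim, 3)  # per-token 3D displacement

    def forward(self, static_feats, kinematics):
        # static_feats: (batch, num_strand_tokens, feat_dim) from the static network
        # kinematics:   (batch, num_frames, kin_dim) motion descriptors
        kin = self.kin_proj(kinematics)
        # Hair tokens query the motion sequence; the attended output carries
        # the motion context used to predict dynamic offsets.
        fused, _ = self.cross_attn(query=static_feats, key=kin, value=kin)
        return self.out_proj(fused + static_feats)  # residual dynamic offset

# Usage: batch of 2, 512 strand tokens, 16 motion frames.
model = DynamicHairCrossAttention()
offsets = model(torch.randn(2, 512, 128), torch.randn(2, 16, 64))
print(offsets.shape)  # torch.Size([2, 512, 3])
```

In such a design, attention weights let each hair token pick out the motion frames most relevant to its secondary motion, which is one plausible reading of how cross-attention could couple static shape features with kinematics.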

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science: Graphics