ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering
By: Weikai Lin, Haoxiang Li, Yuhao Zhu
Potential Business Impact:
Makes computer-generated hair move realistically.
Hair simulation and rendering are challenging due to complex strand dynamics, diverse material properties, and intricate light-hair interactions. Recent video diffusion models can generate high-quality videos, but they lack fine-grained control over hair dynamics. We present ControlHair, a hybrid framework that integrates a physics simulator with conditional video diffusion to enable controllable dynamic hair rendering. ControlHair adopts a three-stage pipeline: it first encodes physics parameters (e.g., hair stiffness, wind) into per-frame geometry using a simulator, then extracts per-frame control signals, and finally feeds the control signals into a video diffusion model to generate videos with the desired hair dynamics. This cascaded design decouples physics reasoning from video generation, supports diverse physics, and simplifies training of the video diffusion model. Trained on a curated 10K-video dataset, ControlHair outperforms text- and pose-conditioned baselines, delivering precisely controlled hair dynamics. We further demonstrate three use cases of ControlHair: dynamic hairstyle try-on, bullet-time effects, and cinemagraphs. ControlHair introduces the first physics-informed video diffusion framework for controllable hair dynamics. A teaser video and experimental results are available on our website.
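To make the cascaded three-stage design concrete, below is a minimal sketch of how such a pipeline could be wired together. All names (PhysicsParams, simulate_hair, extract_control_signals, generate_video) are illustrative placeholders rather than the authors' actual API, and the simulator and diffusion model are stubbed out; the point is only the data flow: physics parameters -> per-frame geometry -> per-frame control signals -> conditioned video generation.

```python
# Hypothetical sketch of ControlHair's cascaded pipeline (not the authors' code).
from dataclasses import dataclass
from typing import List


@dataclass
class PhysicsParams:
    """Illustrative physics parameters driving the hair simulator."""
    hair_stiffness: float = 0.5           # 0 = limp, 1 = rigid
    wind_strength: float = 0.0            # arbitrary units
    wind_direction: tuple = (1.0, 0.0, 0.0)


def simulate_hair(params: PhysicsParams, num_frames: int) -> List[list]:
    """Stage 1 (stub): a strand simulator would produce per-frame hair geometry.
    Here we return a dummy list of strand-vertex lists, one per frame."""
    return [[(0.0, 0.0, 0.0)] for _ in range(num_frames)]


def extract_control_signals(geometry_frames: List[list]) -> List[dict]:
    """Stage 2 (stub): convert per-frame geometry into dense control signals
    (e.g., rendered strand or depth maps) that condition the diffusion model."""
    return [{"strand_map": frame} for frame in geometry_frames]


def generate_video(control_signals: List[dict], prompt: str) -> List[str]:
    """Stage 3 (stub): a conditional video diffusion model would denoise a video
    guided by the per-frame control signals; here we just return frame labels."""
    return [f"frame_{i}:{prompt}" for i, _ in enumerate(control_signals)]


def controllable_hair_render(params: PhysicsParams, prompt: str, num_frames: int = 16):
    """Cascade: physics reasoning is fully decoupled from video generation."""
    geometry = simulate_hair(params, num_frames)     # physics -> geometry
    controls = extract_control_signals(geometry)     # geometry -> control signals
    return generate_video(controls, prompt)          # control signals -> video


if __name__ == "__main__":
    video = controllable_hair_render(
        PhysicsParams(hair_stiffness=0.8, wind_strength=2.0),
        prompt="long wavy hair in a breeze",
    )
    print(len(video), "frames generated")
```

Because the diffusion model only ever sees per-frame control signals, changing the physics (stiffer hair, stronger wind) requires no retraining: only the first stage changes, which is what makes the design easy to train and extend to diverse physics.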
Similar Papers
HairFormer: Transformer-Based Dynamic Neural Hair Simulation
Graphics
Makes computer hair move like real hair.
DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation
Robotics
Robots can now style any hair, even unseen styles.
PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation
CV and Pattern Recognition
Makes objects in generated videos move realistically, like real physical objects.