DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation
By: Chengyang Zhao, Uksang Yoo, Arkadeep Narayan Chaudhury, and more
Potential Business Impact:
Robots can now style hair, including previously unseen hairstyles.
Hair care is an essential daily activity, yet it remains inaccessible to individuals with limited mobility and challenging for autonomous robot systems due to the fine-grained physical structure and complex dynamics of hair. In this work, we present DYMO-Hair, a model-based robot hair care system. We introduce a novel dynamics learning paradigm suited to volumetric quantities such as hair, relying on an action-conditioned latent state editing mechanism, coupled with a compact 3D latent space of diverse hairstyles to improve generalizability. This latent space is pre-trained at scale using a novel hair physics simulator, enabling generalization across previously unseen hairstyles. Using the dynamics model with a Model Predictive Path Integral (MPPI) planner, DYMO-Hair is able to perform visual goal-conditioned hair styling. Experiments in simulation demonstrate that DYMO-Hair's dynamics model outperforms baselines in capturing local deformation for diverse, unseen hairstyles. DYMO-Hair further outperforms baselines in closed-loop hair styling tasks on unseen hairstyles, with an average of 22% lower final geometric error and 42% higher success rate than the state-of-the-art system. Real-world experiments demonstrate zero-shot transferability of our system to wigs, achieving consistent success on challenging unseen hairstyles where the state-of-the-art system fails. Together, these results establish a foundation for model-based robot hair care, advancing toward more generalizable, flexible, and accessible robot hair styling in unconstrained physical environments. More details are available on our project page: https://chengyzhao.github.io/DYMOHair-web/.
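The abstract describes planning with an MPPI planner on top of an action-conditioned latent dynamics model. The following is a minimal sketch of that style of planning loop, not the authors' implementation: the functions `latent_dynamics` and `cost_fn`, the action dimensionality, and all hyperparameters are hypothetical placeholders standing in for the paper's latent state editing model and goal-conditioned cost.

```python
# Minimal MPPI-style planning sketch over a learned latent dynamics model.
# All names here (latent_dynamics, cost_fn, dimensions) are illustrative
# assumptions, not the DYMO-Hair API.
import numpy as np

def mppi_plan(state_latent, goal_latent, latent_dynamics, cost_fn,
              horizon=5, num_samples=256, action_dim=6,
              action_sigma=0.05, temperature=1.0, rng=None):
    """Sample action sequences, roll them out in latent space, and return
    the exponentially weighted average sequence (standard MPPI update)."""
    rng = rng or np.random.default_rng()
    # Sample candidate action sequences around a zero nominal trajectory.
    actions = rng.normal(0.0, action_sigma,
                         size=(num_samples, horizon, action_dim))
    costs = np.zeros(num_samples)
    for i in range(num_samples):
        z = state_latent
        for t in range(horizon):
            # Hypothetical action-conditioned latent transition; the paper's
            # latent state editing mechanism would play this role.
            z = latent_dynamics(z, actions[i, t])
        costs[i] = cost_fn(z, goal_latent)  # e.g., distance to the goal latent
    # MPPI weighting: soft-min over trajectory costs.
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    # Weighted average over sampled sequences -> planned action sequence.
    return np.tensordot(weights, actions, axes=(0, 0))  # (horizon, action_dim)

# Toy usage with stand-in dynamics and cost (purely illustrative).
if __name__ == "__main__":
    dim = 32
    goal, start = np.zeros(dim), np.ones(dim)
    fake_dynamics = lambda z, a: z - 0.1 * np.abs(a).mean()
    l2_cost = lambda z, g: float(np.linalg.norm(z - g))
    plan = mppi_plan(start, goal, fake_dynamics, l2_cost)
    print("first planned action:", plan[0])
```

In a closed-loop system like the one described, only the first action of the returned sequence would typically be executed before re-observing the hair state and replanning.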
Similar Papers
ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering
Graphics
Makes computer-generated hair move realistically.
HairFormer: Transformer-Based Dynamic Neural Hair Simulation
Graphics
Makes computer hair move like real hair.
SRM-Hair: Single Image Head Mesh Reconstruction via 3D Morphable Hair
CV and Pattern Recognition
Creates realistic 3D hair from one picture.