VisualMimic: Visual Humanoid Loco-Manipulation via Motion Tracking and Generation
By: Shaofeng Yin, Yanjie Ze, Hong-Xing Yu, and more
Potential Business Impact:
Robots learn to move and grab like humans.
Humanoid loco-manipulation in unstructured environments demands tight integration of egocentric perception and whole-body control. However, existing approaches either depend on external motion capture systems or fail to generalize across diverse tasks. We introduce VisualMimic, a visual sim-to-real framework that unifies egocentric vision with hierarchical whole-body control for humanoid robots. VisualMimic combines a task-agnostic low-level keypoint tracker -- trained from human motion data via a teacher-student scheme -- with a task-specific high-level policy that generates keypoint commands from visual and proprioceptive input. To ensure stable training, we inject noise into the low-level policy and clip high-level actions using human motion statistics. VisualMimic enables zero-shot transfer of visuomotor policies trained in simulation to real humanoid robots, accomplishing a wide range of loco-manipulation tasks such as box lifting, pushing, football dribbling, and kicking. Beyond controlled laboratory settings, our policies also generalize robustly to outdoor environments. Videos are available at: https://visualmimic.github.io.
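The abstract describes a hierarchical control loop: a task-specific high-level policy turns egocentric vision and proprioception into keypoint commands, which are clipped to human motion statistics and perturbed with noise before a task-agnostic low-level tracker converts them into whole-body actions. The sketch below illustrates that data flow only; all names (HierarchicalController, kp_min, kp_max, noise_std, the policy and tracker callables) are hypothetical and not taken from the VisualMimic codebase.

```python
import numpy as np

class HierarchicalController:
    """Illustrative sketch of the two-level control loop described in the abstract.

    Assumes `high_policy` and `keypoint_tracker` are provided callables; their
    internals (networks, training) are outside this sketch.
    """

    def __init__(self, high_policy, keypoint_tracker, kp_min, kp_max, noise_std=0.02):
        self.high_policy = high_policy            # task-specific: vision + proprioception -> keypoints
        self.keypoint_tracker = keypoint_tracker  # task-agnostic: keypoints -> whole-body actions
        self.kp_min = kp_min                      # per-keypoint lower bounds from human motion statistics
        self.kp_max = kp_max                      # per-keypoint upper bounds from human motion statistics
        self.noise_std = noise_std                # magnitude of noise injected into the low-level input

    def step(self, ego_image, proprio):
        # High-level policy generates keypoint commands from egocentric vision and proprioception.
        keypoints = self.high_policy(ego_image, proprio)
        # Clip commands to the range observed in human motion data so they stay feasible.
        keypoints = np.clip(keypoints, self.kp_min, self.kp_max)
        # Inject Gaussian noise so the tracker remains robust to imperfect high-level commands.
        noisy_keypoints = keypoints + np.random.normal(0.0, self.noise_std, keypoints.shape)
        # Low-level tracker converts keypoint targets into whole-body joint actions.
        joint_targets = self.keypoint_tracker(noisy_keypoints, proprio)
        return joint_targets
```

In this reading, the clipping and noise injection sit at the interface between the two levels, which is consistent with the abstract's statement that both are used to stabilize training of the combined system.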
Similar Papers
ResMimic: From General Motion Tracking to Humanoid Whole-body Loco-Manipulation via Residual Learning
Robotics
Robots learn to move and grab things precisely.
BeyondMimic: From Motion Tracking to Versatile Humanoid Control via Guided Diffusion
Robotics
Robots learn to copy human moves for new tasks.