From Language to Locomotion: Retargeting-free Humanoid Control via Motion Latent Guidance
By: Zhe Li, Cheng Chi, Yangyang Wei, and more
Potential Business Impact:
Robots walk better by understanding spoken words.
Natural language offers a natural interface for humanoid robots, but existing language-guided humanoid locomotion pipelines remain cumbersome and untrustworthy. They typically decode human motion, retarget it to robot morphology, and then track it with a physics-based controller. However, this multi-stage process is prone to cumulative errors, introduces high latency, and yields weak coupling between semantics and control. These limitations call for a more direct pathway from language to action, one that eliminates fragile intermediate stages. Therefore, we present RoboGhost, a retargeting-free framework that directly conditions humanoid policies on language-grounded motion latents. By bypassing explicit motion decoding and retargeting, RoboGhost enables a diffusion-based policy to denoise executable actions directly from noise, preserving semantic intent and supporting fast, reactive control. A hybrid causal transformer-diffusion motion generator further ensures long-horizon consistency while maintaining stability and diversity, yielding rich latent representations for precise humanoid behavior. Extensive experiments demonstrate that RoboGhost substantially reduces deployment latency, improves success rates and tracking precision, and produces smooth, semantically aligned locomotion on real humanoids. Beyond text, the framework naturally extends to other modalities such as images, audio, and music, providing a universal foundation for vision-language-action humanoid systems.
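The core mechanism described above — a diffusion policy that denoises an executable action directly from Gaussian noise, conditioned on a language-grounded motion latent rather than a retargeted reference trajectory — can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the network is replaced by a fixed linear map, and all names, dimensions, and schedule values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

ACTION_DIM = 12   # e.g. humanoid leg joint targets (assumed size)
LATENT_DIM = 32   # motion-latent size (assumed)
STEPS = 50        # reverse diffusion steps (assumed)

# Standard DDPM-style linear noise schedule (illustrative values).
betas = np.linspace(1e-4, 0.02, STEPS)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Fixed linear map standing in for learned weights (illustration only).
W = np.ones((LATENT_DIM, ACTION_DIM)) / LATENT_DIM

def denoiser(x_t, z, t):
    """Toy stand-in for the learned noise predictor eps_theta(x_t, z, t):
    treats the deviation of x_t from tanh(z @ W) as the predicted noise."""
    return x_t - np.tanh(z @ W)

def sample_action(z):
    """Reverse diffusion: start from pure noise and iteratively denoise
    into an action, conditioned on the motion latent z."""
    x = rng.standard_normal(ACTION_DIM)
    for t in reversed(range(STEPS)):
        eps = denoiser(x, z, t)
        # DDPM posterior-mean update toward the denoised action.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise at all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(ACTION_DIM)
    return x

z = rng.standard_normal(LATENT_DIM)  # would come from the motion generator
action = sample_action(z)
print(action.shape)  # (12,)
```

The point of the sketch is the data flow: language conditions the motion generator's latent `z`, and the policy consumes `z` directly, so no human-motion decoding or morphology retargeting step sits between semantics and control.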
Similar Papers
Commanding Humanoid by Free-form Language: A Large Language Action Model with Unified Motion Vocabulary
Robotics
Robots understand and do what you say.
OmniRetarget: Interaction-Preserving Data Generation for Humanoid Whole-Body Loco-Manipulation and Scene Interaction
Robotics
Robots learn parkour and object skills from human moves.