Commanding Humanoid by Free-form Language: A Large Language Action Model with Unified Motion Vocabulary
By: Zhirui Liu, Kaiyang Ji, Ke Yang, and more
Potential Business Impact:
Robots understand and do what you say.
Enabling humanoid robots to follow free-form language commands is critical for seamless human-robot interaction, collaborative task execution, and general-purpose embodied intelligence. While recent advances have improved low-level humanoid locomotion and robot manipulation, language-conditioned whole-body control remains a significant challenge. Existing methods are often limited to simple instructions and sacrifice either motion diversity or physical plausibility. To address this, we introduce Humanoid-LLA, a Large Language Action Model that maps expressive language commands to physically executable whole-body actions for humanoid robots. Our approach integrates three core components: a unified motion vocabulary that aligns human and humanoid motion primitives into a shared discrete space; a vocabulary-directed controller distilled from a privileged policy to ensure physical feasibility; and a physics-informed fine-tuning stage using reinforcement learning with dynamics-aware rewards to enhance robustness and stability. Extensive evaluations in simulation and on a real-world Unitree G1 humanoid show that Humanoid-LLA delivers strong language generalization while maintaining high physical fidelity, outperforming existing language-conditioned controllers in motion naturalness, stability, and execution success rate.
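To make the three-component pipeline described above concrete, here is a minimal, hypothetical sketch in PyTorch. The class names, network sizes, and token shapes are illustrative assumptions, not the authors' released API: a shared discrete codebook stands in for the unified motion vocabulary, a stubbed language model emits motion token ids, and a small MLP stands in for the vocabulary-directed controller distilled from a privileged policy (the RL fine-tuning stage is not shown).

```python
# Hypothetical sketch of the Humanoid-LLA pipeline described in the abstract.
# All names and shapes are illustrative assumptions, not the authors' code:
# a shared codebook quantizes motion primitives, a language model emits code
# indices, and a low-level controller turns each code into joint targets.
import torch
import torch.nn as nn


class UnifiedMotionVocabulary(nn.Module):
    """VQ-style codebook shared by human and humanoid motion primitives (assumed)."""

    def __init__(self, num_codes: int = 512, code_dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def quantize(self, motion_feat: torch.Tensor) -> torch.Tensor:
        # Nearest-neighbour lookup: (B, code_dim) features -> (B,) discrete token ids.
        dists = torch.cdist(motion_feat, self.codebook.weight)  # (B, num_codes)
        return dists.argmin(dim=-1)

    def lookup(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Token ids -> continuous code embeddings consumed by the controller.
        return self.codebook(token_ids)


class VocabularyDirectedController(nn.Module):
    """Low-level policy conditioned on a motion code and proprioception (assumed MLP)."""

    def __init__(self, code_dim: int = 64, obs_dim: int = 48, act_dim: int = 23):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + obs_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, act_dim),  # joint position targets for the humanoid
        )

    def forward(self, code: torch.Tensor, obs: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([code, obs], dim=-1))


def run_command(language_to_tokens, vocab, controller, command: str, obs: torch.Tensor):
    """Map a free-form command to actions: LLM -> motion tokens -> whole-body actions."""
    token_ids = language_to_tokens(command)   # e.g. tensor([17, 231, 5, ...])
    codes = vocab.lookup(token_ids)           # (T, code_dim)
    return torch.stack([controller(c.unsqueeze(0), obs) for c in codes])


if __name__ == "__main__":
    vocab = UnifiedMotionVocabulary()
    controller = VocabularyDirectedController()
    # Stand-in for the fine-tuned language model that emits motion token ids.
    fake_llm = lambda cmd: torch.randint(0, 512, (4,))
    actions = run_command(fake_llm, vocab, controller, "wave your right hand", torch.zeros(1, 48))
    print(actions.shape)  # torch.Size([4, 1, 23])
```

In the paper's actual system the controller is first distilled from a privileged simulation policy and then fine-tuned with reinforcement learning using dynamics-aware rewards; the sketch above only illustrates how a shared discrete motion vocabulary lets a language model and a physical controller communicate through the same token space.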
Similar Papers
Architecting Large Action Models for Human-in-the-Loop Intelligent Robots
Robotics
Robots can learn to act safely by combining AI parts.
LangWBC: Language-directed Humanoid Whole-Body Control via End-to-end Learning
Robotics
Robots understand words, move bodies like people.
Quadruped-Legged Robot Movement Plan Generation using Large Language Model
Robotics
Robots walk anywhere by just talking to them.