LeVERB: Humanoid Whole-Body Control with Latent Vision-Language Instruction
By: Haoru Xue, Xiaoyu Huang, Dantong Niu, and more
Potential Business Impact:
Robots follow visual and language instructions to perform many new whole-body tasks.
Vision-language-action (VLA) models have demonstrated strong semantic understanding and zero-shot generalization, yet most existing systems assume an accurate low-level controller with a hand-crafted action "vocabulary" such as end-effector pose or root velocity. This assumption confines prior work to quasi-static tasks and precludes the agile, whole-body behaviors required by humanoid whole-body control (WBC) tasks. To address this gap in the literature, we start by introducing the first sim-to-real-ready, vision-language, closed-loop benchmark for humanoid WBC, comprising over 150 tasks from 10 categories. We then propose LeVERB: Latent Vision-Language-Encoded Robot Behavior, a hierarchical latent instruction-following framework for humanoid vision-language WBC, the first of its kind. At the top level, a vision-language policy learns a latent action vocabulary from synthetically rendered kinematic demonstrations; at the low level, a reinforcement-learned WBC policy consumes these latent verbs to generate dynamics-level commands. In our benchmark, LeVERB zero-shot attains an 80% success rate on simple visual navigation tasks and a 58.5% success rate overall, outperforming a naive hierarchical whole-body VLA implementation by 7.8 times.
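To make the two-level design concrete, below is a minimal, hedged PyTorch sketch of the hierarchy the abstract describes: a high-level vision-language policy that compresses an image and an instruction into a latent "verb", and a low-level RL-trained whole-body controller that combines that latent with proprioception to produce joint commands. All module names, network sizes, and dimensions here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LatentVerbEncoder(nn.Module):
    """High-level vision-language policy (illustrative sketch only).

    Maps a camera image and a pre-embedded language instruction to a
    latent "verb" vector. Backbones and dimensions are placeholders.
    """

    def __init__(self, img_feat_dim=512, text_feat_dim=512, latent_dim=32):
        super().__init__()
        # Stand-ins for pretrained vision / language backbones.
        self.vision_backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(img_feat_dim), nn.ReLU())
        self.text_backbone = nn.Sequential(nn.LazyLinear(text_feat_dim), nn.ReLU())
        self.fuse = nn.Sequential(
            nn.Linear(img_feat_dim + text_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, image, text_embedding):
        z_img = self.vision_backbone(image)
        z_txt = self.text_backbone(text_embedding)
        return self.fuse(torch.cat([z_img, z_txt], dim=-1))  # the latent "verb"


class WholeBodyController(nn.Module):
    """Low-level RL-trained WBC policy (illustrative sketch only).

    Consumes proprioceptive state plus the latent verb and outputs
    dynamics-level joint targets at high control frequency.
    """

    def __init__(self, proprio_dim=48, latent_dim=32, num_joints=23):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(proprio_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, proprio, latent_verb):
        return self.policy(torch.cat([proprio, latent_verb], dim=-1))


# Closed-loop usage sketch: the high-level policy runs at a slow rate and the
# low-level controller reuses the latest latent verb at every control step.
encoder, controller = LatentVerbEncoder(), WholeBodyController()
image = torch.randn(1, 3, 64, 64)      # toy camera frame
text_embedding = torch.randn(1, 512)   # pre-embedded instruction (assumed shape)
proprio = torch.randn(1, 48)           # joint positions, velocities, etc.

latent_verb = encoder(image, text_embedding)
joint_targets = controller(proprio, latent_verb)
```

The design point this sketch tries to capture is that the interface between the two levels is a learned latent vector rather than a hand-crafted action vocabulary, which is what lets the low-level policy express agile whole-body behaviors beyond end-effector pose or root-velocity commands.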
Similar Papers
LangWBC: Language-directed Humanoid Whole-Body Control via End-to-end Learning
Robotics
Robots understand words, move bodies like people.
WholeBodyVLA: Towards Unified Latent VLA for Whole-Body Loco-Manipulation Control
Robotics
Robots can now reach and grab things anywhere.
A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning
Robotics
Helps robots learn tasks faster and better.