RL from Physical Feedback: Aligning Large Motion Models with Humanoid Control

Published: June 15, 2025 | arXiv ID: 2506.12769v1

By: Junpeng Yue, Zepeng Wang, Yuxuan Wang, and more

Potential Business Impact:

Robots learn new moves from written instructions.

Business Areas:
Motion Capture, Media and Entertainment, Video

This paper addresses a critical challenge in robotics: translating text-driven human motions into executable actions for humanoid robots, enabling efficient and cost-effective learning of new behaviors. While existing text-to-motion generation methods achieve semantic alignment between language and motion, they often produce motions that are kinematically or physically infeasible and therefore unsuitable for real-world deployment. To bridge this sim-to-real gap, we propose Reinforcement Learning from Physical Feedback (RLPF), a novel framework that integrates physics-aware motion evaluation with text-conditioned motion generation. RLPF employs a motion tracking policy to assess feasibility in a physics simulator, generating rewards for fine-tuning the motion generator. Furthermore, RLPF introduces an alignment verification module to preserve semantic fidelity to the text instructions. This joint optimization ensures both physical plausibility and instruction alignment. Extensive experiments show that RLPF greatly outperforms baseline methods in generating physically feasible motions while maintaining semantic correspondence with text instructions, enabling successful deployment on real humanoid robots.
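
The abstract describes the fine-tuning loop only at a high level. The sketch below illustrates how such a physics-feedback loop could be wired up; it is a minimal illustration under stated assumptions, not the paper's implementation. Every function and parameter here (generate_motion, tracking_error, alignment_score, the reward weights, the Gaussian noise scale) is a hypothetical stand-in, and a simple score-function (REINFORCE-style) update stands in for whatever RL algorithm the authors actually use.

```python
import numpy as np

# --- Hypothetical stand-ins; none of these are the paper's actual components. ---

def generate_motion(text, params, rng):
    """Stand-in for a text-conditioned motion generator.

    Returns a (T, D) pose sequence: T frames of D joint targets.
    The text argument is unused in this toy stub.
    """
    T, D = 60, 23  # assumed sequence length and humanoid DoF count
    return rng.normal(scale=0.1, size=(T, D)) + params

def tracking_error(motion):
    """Stand-in for rolling the motion through a physics simulator with a
    pretrained tracking policy and measuring how closely it can be followed."""
    return float(np.mean(np.abs(motion)))  # placeholder proxy for sim feedback

def alignment_score(text, motion):
    """Stand-in for the alignment verification module (e.g., a learned
    text-motion similarity model); returns a score in (0, 1]."""
    return 1.0 / (1.0 + float(np.var(motion)))  # placeholder proxy

def rlpf_reward(text, motion, w_phys=1.0, w_align=0.5):
    """Combined reward: physical feasibility plus semantic alignment."""
    return -w_phys * tracking_error(motion) + w_align * alignment_score(text, motion)

# --- Minimal score-function (REINFORCE-style) fine-tuning loop. ---
rng = np.random.default_rng(0)
params = np.full(23, 0.5)   # toy "generator parameters" (assumed)
lr, baseline = 0.05, 0.0
prompt = "wave with the right hand"

for step in range(200):
    motion = generate_motion(prompt, params, rng)
    r = rlpf_reward(prompt, motion)
    baseline = 0.9 * baseline + 0.1 * r   # running baseline for variance reduction
    # Gradient of the log-probability of a Gaussian perturbation w.r.t. its mean:
    grad = (motion - params).mean(axis=0) / 0.1**2
    params += lr * (r - baseline) * grad  # policy-gradient ascent on the reward
```

In the framework the abstract describes, the feasibility term would come from rolling the generated motion through a physics simulator under a pretrained tracking policy, and the alignment term from the verification module that checks semantic fidelity to the text; the toy proxies above exist only so the loop runs end to end.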

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Robotics