Benchmarking Humanoid Imitation Learning with Motion Difficulty
By: Zhaorui Meng, Lu Yin, Xinrui Chen, and more
Potential Business Impact:
Measures how hard robot moves are to copy.
Physics-based motion imitation is central to humanoid control, yet current evaluation metrics (e.g., joint position error) measure only how well a policy imitates a motion, not how difficult the motion itself is to imitate. This conflates policy performance with motion difficulty, obscuring whether failures stem from poor learning or from inherently challenging motions. In this work, we address this gap with the Motion Difficulty Score (MDS), a novel metric that defines and quantifies imitation difficulty independently of policy performance. Grounded in rigid-body dynamics, MDS interprets difficulty as the torque variation induced by small pose perturbations: larger torque-to-pose variation yields flatter reward landscapes and thus higher learning difficulty. MDS captures this through three properties of the perturbation-induced torque space: volume, variance, and temporal variability. We also use it to construct MD-AMASS, a difficulty-aware repartitioning of the AMASS dataset. Empirically, we rigorously validate MDS by demonstrating its explanatory power over the performance of state-of-the-art motion imitation policies. We further demonstrate the utility of MDS through two new MDS-based metrics: Maximum Imitable Difficulty (MID) and Difficulty-Stratified Joint Error (DSJE), providing fresh insights into imitation learning.
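To make the idea of perturbation-induced torque statistics concrete, here is a minimal illustrative sketch of how such a score could be computed. It assumes a generic per-frame inverse-dynamics routine and uses simple proxies for the three properties (log-determinant of the torque covariance for volume, its trace for variance, and frame-to-frame change of mean induced torque for temporal variability); the paper's actual definitions and aggregation may differ.

```python
import numpy as np

def motion_difficulty_sketch(poses, inverse_dynamics, eps=1e-2, n_perturb=32, seed=0):
    """Illustrative MDS-style computation (not the paper's exact formulation).

    poses: (T, D) array of reference joint poses over time.
    inverse_dynamics: callable mapping a pose (D,) -> joint torques (D,);
        assumed here for illustration, e.g. a rigid-body dynamics routine.
    Returns a scalar combining volume, variance, and temporal variability
    of the perturbation-induced torque space.
    """
    rng = np.random.default_rng(seed)
    T, D = poses.shape

    volumes, variances, mean_torque_devs = [], [], []
    for t in range(T):
        # Sample small pose perturbations around the reference frame.
        deltas = eps * rng.standard_normal((n_perturb, D))
        base_torque = inverse_dynamics(poses[t])
        torque_devs = np.stack(
            [inverse_dynamics(poses[t] + d) - base_torque for d in deltas]
        )

        cov = np.cov(torque_devs, rowvar=False)
        # "Volume" proxy: log-determinant of the (regularized) torque covariance.
        volumes.append(np.linalg.slogdet(cov + 1e-8 * np.eye(D))[1])
        # "Variance" proxy: total spread of the induced torques.
        variances.append(np.trace(cov))
        mean_torque_devs.append(torque_devs.mean(axis=0))

    mean_torque_devs = np.stack(mean_torque_devs)
    # "Temporal variability" proxy: frame-to-frame change of induced torques.
    temporal = (
        np.linalg.norm(np.diff(mean_torque_devs, axis=0), axis=1).mean()
        if T > 1 else 0.0
    )

    # Simple illustrative aggregation into one difficulty score.
    return float(np.mean(volumes) + np.mean(variances) + temporal)
```

Intuitively, motions for which small pose errors cause large, erratic torque changes score higher, matching the abstract's claim that such motions produce flatter reward landscapes and are harder to learn.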
Similar Papers
Object-Aware 4D Human Motion Generation
CV and Pattern Recognition
Makes videos of people move realistically with objects.
Multi-Domain Motion Embedding: Expressive Real-Time Mimicry for Legged Robots
Robotics
Robots learn to copy human and animal moves better.
Back to Basics: Motion Representation Matters for Human Motion Generation Using Diffusion Model
CV and Pattern Recognition
Makes computer-generated dancing look more real.