Do You Have Freestyle? Expressive Humanoid Locomotion via Audio Control
By: Zhe Li, Cheng Chi, Yangyang Wei, and more
Humans intuitively move to sound, but current humanoid robots lack expressive, improvisational capabilities and remain confined to predefined motions or sparse commands. Pipelines that generate motion from audio and then retarget it to a robot rely on explicit motion reconstruction, which leads to cascaded errors, high latency, and a disjointed acoustic-to-actuation mapping. We propose RoboPerform, the first unified audio-to-locomotion framework that directly generates music-driven dance and speech-driven co-speech gestures from audio. Guided by the core principle of "motion = content + style", the framework treats audio as an implicit style signal and eliminates the need for explicit motion reconstruction. RoboPerform integrates a ResMoE teacher policy that adapts to diverse motion patterns with a diffusion-based student policy that injects audio style. This retargeting-free design ensures low latency and high fidelity. Experimental validation shows that RoboPerform achieves promising results in physical plausibility and audio alignment, successfully turning robots into responsive performers that react to audio.
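The abstract names the main architectural pieces (a ResMoE teacher, a diffusion-based student with audio style injection) but not their implementation. As a reading aid only, below is a minimal PyTorch sketch of what such a teacher-student pair could look like: all module sizes, class and function names (ResMoETeacher, DiffusionStudent, distill_step), and the DDPM-style distillation objective are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumed shapes and names; not the RoboPerform implementation):
# a residual mixture-of-experts teacher over proprioceptive state, and a
# diffusion student that denoises toward the teacher's action while being
# conditioned on an audio style embedding.

import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, AUDIO_DIM, N_EXPERTS = 48, 12, 128, 4  # assumed sizes


class ResMoETeacher(nn.Module):
    """Teacher: shared trunk plus a gated mixture of residual experts."""

    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ELU())
        self.base = nn.Linear(256, ACT_DIM)
        self.experts = nn.ModuleList(nn.Linear(256, ACT_DIM) for _ in range(N_EXPERTS))
        self.gate = nn.Linear(256, N_EXPERTS)

    def forward(self, obs):
        h = self.trunk(obs)
        w = F.softmax(self.gate(h), dim=-1)                      # expert weights
        residual = sum(w[:, i:i + 1] * e(h) for i, e in enumerate(self.experts))
        return self.base(h) + residual                           # base action + expert residual


class DiffusionStudent(nn.Module):
    """Student: predicts the noise added to an action, conditioned on
    proprioception, a diffusion timestep, and an audio style embedding."""

    def __init__(self, n_steps=50):
        super().__init__()
        self.n_steps = n_steps
        self.time_emb = nn.Embedding(n_steps, 32)
        self.net = nn.Sequential(
            nn.Linear(ACT_DIM + OBS_DIM + AUDIO_DIM + 32, 256), nn.ELU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, noisy_action, t, obs, audio_style):
        x = torch.cat([noisy_action, obs, audio_style, self.time_emb(t)], dim=-1)
        return self.net(x)


def distill_step(teacher, student, obs, audio_style, betas):
    """One DDPM-style distillation step: corrupt the teacher's action and
    train the student to predict the injected noise."""
    with torch.no_grad():
        clean_action = teacher(obs)
    t = torch.randint(0, student.n_steps, (obs.shape[0],))
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].unsqueeze(-1)
    noise = torch.randn_like(clean_action)
    noisy = alpha_bar.sqrt() * clean_action + (1 - alpha_bar).sqrt() * noise
    return F.mse_loss(student(noisy, t, obs, audio_style), noise)


if __name__ == "__main__":
    teacher, student = ResMoETeacher(), DiffusionStudent()
    betas = torch.linspace(1e-4, 0.02, student.n_steps)
    obs = torch.randn(8, OBS_DIM)
    audio = torch.randn(8, AUDIO_DIM)          # stand-in for an audio encoder output
    loss = distill_step(teacher, student, obs, audio, betas)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

Under this reading, the teacher can be trained without audio to produce physically plausible motion, while the deployed student needs only proprioception and the audio embedding, which is one way a retargeting-free design can keep inference latency low.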
Similar Papers
From Language to Locomotion: Retargeting-free Humanoid Control via Motion Latent Guidance
Robotics
Robots walk better by understanding spoken words.
Learning Robot Manipulation from Audio World Models
Robotics
Helps robots understand sounds to do tasks better.