AdaMorph: Unified Motion Retargeting via Embodiment-Aware Adaptive Transformers
By: Haoyu Zhang, Shibo Jin, Lvsong Li, and more
Potential Business Impact:
Makes different kinds of robots copy human moves.
Retargeting human motion to heterogeneous robots is a fundamental challenge in robotics, primarily due to the severe kinematic and dynamic discrepancies between varying embodiments. Existing solutions typically resort to training embodiment-specific models, which scales poorly and fails to exploit shared motion semantics. To address this, we present AdaMorph, a unified neural retargeting framework that enables a single model to adapt human motion to diverse robot morphologies. Our approach treats retargeting as a conditional generation task. We map human motion into a morphology-agnostic latent intent space and utilize a dual-purpose prompting mechanism to condition the generation. Instead of simple input concatenation, we leverage Adaptive Layer Normalization (AdaLN) to dynamically modulate the decoder's feature space based on embodiment constraints. Furthermore, we enforce physical plausibility through a curriculum-based training objective that ensures orientation and trajectory consistency via integration. Experimental results on 12 distinct humanoid robots demonstrate that AdaMorph effectively unifies control across heterogeneous topologies, exhibiting strong zero-shot generalization to unseen complex motions while preserving the dynamic essence of the source behaviors.
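To make the AdaLN conditioning concrete, here is a minimal sketch of how an embodiment embedding can modulate decoder features via adaptive layer normalization, in the spirit the abstract describes. This is an illustrative PyTorch reconstruction under stated assumptions, not the paper's implementation: the module name `AdaLN`, the `(1 + gamma)` residual-style scaling, and all dimensions are hypothetical.

```python
# Hypothetical AdaLN-style embodiment conditioning (illustrative, not AdaMorph's code).
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    """LayerNorm whose scale and shift are predicted from a conditioning vector."""
    def __init__(self, feat_dim: int, cond_dim: int):
        super().__init__()
        # Normalize without learned affine parameters; they come from the condition.
        self.norm = nn.LayerNorm(feat_dim, elementwise_affine=False)
        # Predict a per-channel scale (gamma) and shift (beta) from the embodiment embedding.
        self.to_scale_shift = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (batch, seq, feat_dim)  motion feature tokens
        # cond: (batch, cond_dim)       embodiment prompt embedding
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=-1)
        # Broadcast the condition over the sequence dimension.
        return self.norm(x) * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)

# Usage: modulate decoder features with a robot-specific embedding.
layer = AdaLN(feat_dim=256, cond_dim=64)
motion = torch.randn(8, 120, 256)   # latent motion-intent tokens
embodiment = torch.randn(8, 64)     # per-robot morphology embedding
out = layer(motion, embodiment)     # (8, 120, 256)
```

The key design point this sketch captures is that the embodiment condition is injected by rescaling and shifting normalized features inside the decoder, rather than by concatenating it to the input, which lets one set of decoder weights serve many robot morphologies.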
Similar Papers
Towards Adaptable Humanoid Control via Adaptive Motion Tracking
Robotics
Robots copy human moves from one example.
G-DReaM: Graph-conditioned Diffusion Retargeting across Multiple Embodiments
Robotics
Lets robots with different bodies copy human moves.
OmniRetarget: Interaction-Preserving Data Generation for Humanoid Whole-Body Loco-Manipulation and Scene Interaction
Robotics
Robots learn parkour and object skills from human moves.