Learning Whole-Body Human-Humanoid Interaction from Human-Human Demonstrations
By: Wei-Jin Huang, Yue-Yi Zhang, Yi-Lin Wei, and more
Enabling humanoid robots to physically interact with humans is a critical frontier, but progress is hindered by the scarcity of high-quality Human-Humanoid Interaction (HHoI) data. Leveraging abundant Human-Human Interaction (HHI) data offers a scalable alternative, yet we first demonstrate that standard retargeting fails because it breaks the essential contacts. We address this with PAIR (Physics-Aware Interaction Retargeting), a contact-centric, two-stage pipeline that preserves contact semantics across morphology differences to generate physically consistent HHoI data. This high-quality data, however, exposes a second failure: conventional imitation learning policies merely mimic trajectories and lack interactive understanding. We therefore introduce D-STAR (Decoupled Spatio-Temporal Action Reasoner), a hierarchical policy that disentangles when to act from where to act. In D-STAR, a Phase Attention module (when) and a Multi-Scale Spatial module (where) are fused by a diffusion head to produce synchronized whole-body behaviors that go beyond mimicry. By decoupling these reasoning streams, the model learns robust temporal phases without being distracted by spatial noise, leading to responsive, synchronized collaboration. We validate our framework through extensive and rigorous simulations, demonstrating significant performance gains over baseline approaches and a complete, effective pipeline for learning complex whole-body interactions from HHI data.
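To make the decoupled "when / where" structure of D-STAR concrete, the minimal PyTorch sketch below shows one plausible reading of the abstract: a temporal Phase Attention branch, a Multi-Scale Spatial branch, and a simplified diffusion-style denoising head that fuses the two conditioning streams. All module names, dimensions, and the toy denoiser are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a decoupled spatio-temporal policy in the spirit of D-STAR.
# Shapes, hyperparameters, and the simplified diffusion head are assumptions.
import torch
import torch.nn as nn


class PhaseAttention(nn.Module):
    """Temporal branch ("when"): attends over a history of interaction states
    and pools it into a single phase embedding."""
    def __init__(self, state_dim: int, embed_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(state_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.query = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (B, T, state_dim) -> phase embedding (B, embed_dim)
        tokens = self.proj(history)
        q = self.query.expand(history.shape[0], -1, -1)
        phase, _ = self.attn(q, tokens, tokens)
        return phase.squeeze(1)


class MultiScaleSpatial(nn.Module):
    """Spatial branch ("where"): encodes the observed states at several temporal
    resolutions and averages them into one spatial feature."""
    def __init__(self, state_dim: int, embed_dim: int = 128, scales=(1, 2, 4)):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.AvgPool1d(s), nn.Conv1d(state_dim, embed_dim, 1))
            for s in scales
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (B, T, state_dim)
        x = history.transpose(1, 2)  # (B, state_dim, T)
        feats = [enc(x).mean(dim=-1) for enc in self.encoders]
        return torch.stack(feats, dim=0).mean(dim=0)


class DiffusionHead(nn.Module):
    """Toy denoiser: fuses the two conditioning streams and predicts the noise
    added to a whole-body action vector (stand-in for a full diffusion policy)."""
    def __init__(self, action_dim: int, cond_dim: int = 256, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, noisy_action, cond, t):
        # t: (B, 1) diffusion timestep
        return self.net(torch.cat([noisy_action, cond, t], dim=-1))


class DecoupledPolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.when = PhaseAttention(state_dim)
        self.where = MultiScaleSpatial(state_dim)
        self.head = DiffusionHead(action_dim)

    def forward(self, history, noisy_action, t):
        cond = torch.cat([self.when(history), self.where(history)], dim=-1)
        return self.head(noisy_action, cond, t)


if __name__ == "__main__":
    policy = DecoupledPolicy(state_dim=64, action_dim=29)   # dims are arbitrary
    hist = torch.randn(2, 16, 64)        # two sequences of 16 observed states
    noisy = torch.randn(2, 29)           # noised whole-body action
    t = torch.rand(2, 1)                 # diffusion timestep
    print(policy(hist, noisy, t).shape)  # torch.Size([2, 29])
```

The point of the sketch is the separation of concerns: the temporal branch never sees raw spatial detail at inference time beyond what it pools, so phase estimation stays robust to spatial noise, while the spatial branch supplies the fine-grained conditioning the denoising head needs to place the whole-body action.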
Similar Papers
Learning to Generate Human-Human-Object Interactions from Textual Descriptions
CV and Pattern Recognition
Teaches computers to show people interacting with objects.
Decoupled Generative Modeling for Human-Object Interaction Synthesis
CV and Pattern Recognition
Makes robots move and interact realistically.
Efficient and Scalable Monocular Human-Object Interaction Motion Reconstruction
CV and Pattern Recognition
Robots learn to copy human actions from videos.