EgoMI: Learning Active Vision and Whole-Body Manipulation from Egocentric Human Demonstrations
By: Justin Yu, Yide Shentu, Di Wu, and more
Potential Business Impact:
Robots learn to copy human actions better.
Imitation learning from human demonstrations offers a promising approach for robot skill acquisition, but egocentric human data introduces fundamental challenges due to the embodiment gap. During manipulation, humans actively coordinate head and hand movements, continuously reposition their viewpoint, and use pre-action visual-fixation search strategies to locate relevant objects. These behaviors create dynamic, task-driven head motions that static robot sensing systems cannot replicate, leading to a significant distribution shift that degrades policy performance. We present EgoMI (Egocentric Manipulation Interface), a framework that captures synchronized end-effector and active head trajectories during manipulation tasks, producing data that can be retargeted to compatible semi-humanoid robot embodiments. To handle rapid, wide-spanning head viewpoint changes, we introduce a memory-augmented policy that selectively incorporates historical observations. We evaluate our approach on a bimanual robot equipped with an actuated camera head and find that policies with explicit head-motion modeling consistently outperform baseline methods. Results suggest that coordinated hand-eye learning with EgoMI effectively bridges the human-robot embodiment gap for robust imitation learning on semi-humanoid embodiments. Project page: https://egocentric-manipulation-interface.github.io
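The abstract does not describe the memory-augmented policy's internals, but the core idea of selectively incorporating historical observations under rapid head motion can be sketched. Below is a minimal, illustrative viewpoint-keyed memory buffer: frames are stored only when the head viewpoint has moved far enough from existing keyframes, and reads return the stored features closest to the current head direction. The class name, thresholds, and feature representation are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np


class ViewpointKeyedMemory:
    """Illustrative sketch (not the paper's architecture): keep a compact
    set of past observation features keyed by head viewing direction, so a
    policy can selectively read back views it no longer sees directly."""

    def __init__(self, angle_threshold_rad=0.3, capacity=8):
        self.angle_threshold = angle_threshold_rad  # min angular novelty to store
        self.capacity = capacity                    # max number of keyframes
        self.entries = []                           # list of (unit head dir, feature)

    def maybe_store(self, head_dir, feature):
        """Store the frame only if its viewpoint is novel; returns True if stored."""
        head_dir = head_dir / np.linalg.norm(head_dir)
        for stored_dir, _ in self.entries:
            angle = np.arccos(np.clip(stored_dir @ head_dir, -1.0, 1.0))
            if angle < self.angle_threshold:
                return False  # too close to an existing keyframe; skip
        self.entries.append((head_dir, feature))
        if len(self.entries) > self.capacity:
            self.entries.pop(0)  # evict the oldest keyframe
        return True

    def read(self, head_dir, k=2):
        """Return up to k stored features whose viewpoints best match the
        current head direction (selective use of history)."""
        head_dir = head_dir / np.linalg.norm(head_dir)
        ranked = sorted(self.entries, key=lambda e: -(e[0] @ head_dir))
        return [feat for _, feat in ranked[:k]]


# Example: a forward and a leftward view are kept; a near-duplicate is not.
mem = ViewpointKeyedMemory()
mem.maybe_store(np.array([0.0, 0.0, 1.0]), "forward-view-features")
mem.maybe_store(np.array([1.0, 0.0, 0.0]), "left-view-features")
mem.maybe_store(np.array([0.0, 0.05, 1.0]), "near-duplicate")  # rejected
print(mem.read(np.array([0.1, 0.0, 1.0]), k=1))  # closest stored view
```

A real memory-augmented policy would attend over such stored features with a learned mechanism rather than a hard angular threshold; the sketch only conveys why keying memory by viewpoint helps when the head moves quickly and widely.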
Similar Papers
EMMA: Scaling Mobile Manipulation via Egocentric Human Data
Robotics
Teaches robots to do tasks using human moves.
Uni-Hand: Universal Hand Motion Forecasting in Egocentric Views
CV and Pattern Recognition
Finds exact moments hands touch objects.
OpenEgo: A Large-Scale Multimodal Egocentric Dataset for Dexterous Manipulation
CV and Pattern Recognition
Teaches robots to copy human hand movements.