UMIGen: A Unified Framework for Egocentric Point Cloud Generation and Cross-Embodiment Robotic Imitation Learning
By: Yan Huang, Shoujie Li, Xingting Li, and more
Potential Business Impact:
Robots learn new tasks faster with less special gear.
Data-driven robotic learning faces an obvious dilemma: robust policies demand large-scale, high-quality demonstration data, yet collecting such data remains a major challenge owing to high operational costs, dependence on specialized hardware, and the limited spatial generalization capability of current methods. The Universal Manipulation Interface (UMI) relaxes the strict hardware requirements for data collection, but it is restricted to capturing only RGB images of a scene and omits the 3D geometric information on which many tasks rely. Inspired by DemoGen, we propose UMIGen, a unified framework that consists of two key components: (1) Cloud-UMI, a handheld data collection device that requires no visual SLAM and simultaneously records point cloud observation-action pairs; and (2) a visibility-aware optimization mechanism that extends the DemoGen pipeline to egocentric 3D observations by generating only points within the camera's field of view. These two components enable efficient data generation that aligns with real egocentric observations and can be directly transferred across different robot embodiments without any post-processing. Experiments in both simulated and real-world settings demonstrate that UMIGen supports strong cross-embodiment generalization and accelerates data collection in diverse manipulation tasks.
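The second component described above, visibility-aware generation, amounts to keeping only the points that a given egocentric camera could actually observe. The sketch below shows one plausible way to do this with a standard pinhole camera model; the function name `cull_to_camera_fov`, its intrinsics parameters, and the depth clipping range are illustrative assumptions, not details taken from the UMIGen paper or codebase.

```python
# Hypothetical sketch of visibility-aware point culling: keep only points
# that fall inside a pinhole camera's field of view. Names and parameters
# are illustrative, not taken from UMIGen.
import numpy as np

def cull_to_camera_fov(points_world, T_cam_world, fx, fy, cx, cy,
                       width, height, near=0.05, far=2.0):
    """Return the subset of `points_world` (N, 3) visible to a pinhole camera.

    T_cam_world: 4x4 extrinsic transforming world coordinates into the camera frame.
    fx, fy, cx, cy: pinhole intrinsics; width, height: image size in pixels.
    near, far: depth clipping range in meters.
    """
    # Transform points into the camera frame.
    homo = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    pts_cam = (T_cam_world @ homo.T).T[:, :3]
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]

    # Keep points within the depth clipping range (i.e., in front of the camera).
    in_depth = (z > near) & (z < far)

    # Project onto the image plane; guard the division for points behind the camera.
    safe_z = np.where(in_depth, z, 1.0)
    u = fx * x / safe_z + cx
    v = fy * y / safe_z + cy
    in_image = (u >= 0) & (u < width) & (v >= 0) & (v < height)

    return points_world[in_depth & in_image]
```

In a DemoGen-style pipeline, a filter like this would presumably be applied to each synthetically transformed point cloud so that the generated observations stay consistent with what the egocentric sensor would actually have seen, rather than containing geometry outside its frustum.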
Similar Papers
MV-UMI: A Scalable Multi-View Interface for Cross-Embodiment Learning
Robotics
Robots learn better from more camera views.
ActiveUMI: Robotic Manipulation with Active Perception from Robot-Free Human Demonstrations
Robotics
Teaches robots to do tasks by watching humans.
IGen: Scalable Data Generation for Robot Learning from Open-World Images
Robotics
Teaches robots to do tasks using everyday pictures.