UMI-on-Air: Embodiment-Aware Guidance for Embodiment-Agnostic Visuomotor Policies
By: Harsh Gupta, Xiaofeng Guo, Huy Ha, and others
Potential Business Impact:
Lets robots with different bodies, such as drones, reuse grasping skills learned from human demonstrations.
We introduce UMI-on-Air, a framework for embodiment-aware deployment of embodiment-agnostic manipulation policies. Our approach leverages diverse, unconstrained human demonstrations collected with a handheld gripper (UMI) to train generalizable visuomotor policies. A central challenge in transferring these policies to constrained robotic embodiments, such as aerial manipulators, is the mismatch in control and robot dynamics, which often leads to out-of-distribution behaviors and poor execution. To address this, we propose Embodiment-Aware Diffusion Policy (EADP), which couples a high-level UMI policy with a low-level embodiment-specific controller at inference time. By integrating gradient feedback from the controller's tracking cost into the diffusion sampling process, our method steers trajectory generation towards dynamically feasible modes tailored to the deployment embodiment. This enables plug-and-play, embodiment-aware trajectory adaptation at test time. We validate our approach on multiple long-horizon and high-precision aerial manipulation tasks, showing improved success rates, efficiency, and robustness under disturbances compared to unguided diffusion baselines. Finally, we demonstrate deployment in previously unseen environments, using UMI demonstrations collected in the wild, highlighting a practical pathway for scaling generalizable manipulation skills across diverse, and even highly constrained, embodiments. All code, data, and checkpoints will be publicly released after acceptance. Result videos can be found at umi-on-air.github.io.
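The core idea of EADP, injecting the gradient of a controller tracking cost into the diffusion sampling loop, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: the real policy is a learned trajectory diffusion model, whereas here a simple iterative denoiser over a 1-D trajectory stands in for it, and the hypothetical tracking cost just penalizes per-step displacements larger than the embodiment can track.

```python
import numpy as np

def tracking_cost_grad(traj, max_step):
    """Gradient of a toy tracking cost C = 0.5 * sum_t max(|d_t| - max_step, 0)^2,
    where d_t = x_t - x_{t-1}; penalizes steps the embodiment cannot follow."""
    steps = np.diff(traj, prepend=traj[:1])          # per-step displacements d_t
    excess = np.clip(np.abs(steps) - max_step, 0.0, None)
    grad_d = np.sign(steps) * excess                 # dC/dd_t
    g = grad_d.copy()                                # d_t depends on x_t with +1
    g[:-1] -= grad_d[1:]                             # ...and on x_{t-1} with -1
    return g

def guided_denoise(noisy_traj, denoise_fn, guidance_scale, max_step, n_steps=50):
    """Interleave the policy's denoising update with gradient descent on the
    controller's tracking cost, steering samples toward feasible modes."""
    traj = noisy_traj.copy()
    for _ in range(n_steps):
        traj = denoise_fn(traj)                                       # policy step
        traj -= guidance_scale * tracking_cost_grad(traj, max_step)   # guidance step
    return traj
```

With `guidance_scale=0.0` this reduces to unguided sampling; a positive scale trades off staying near the policy's modes against dynamic feasibility, mirroring the plug-and-play adaptation described above.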
Similar Papers
MV-UMI: A Scalable Multi-View Interface for Cross-Embodiment Learning
Robotics
Robots learn better from more camera views.
ActiveUMI: Robotic Manipulation with Active Perception from Robot-Free Human Demonstrations
Robotics
Teaches robots to do tasks by watching humans.
Advances on Affordable Hardware Platforms for Human Demonstration Acquisition in Agricultural Applications
Robotics
Teaches robots to pick fruit by watching.