Spatial RoboGrasp: Generalized Robotic Grasping Control Policy
By: Yiqi Huang, Travis Davies, Jiahuan Yan, and more
Potential Business Impact:
Robots grasp objects better in new places.
Achieving generalizable and precise robotic manipulation across diverse environments remains a critical challenge, largely due to limitations in spatial perception. While prior imitation-learning approaches have made progress, their reliance on raw RGB inputs and handcrafted features often leads to overfitting and poor 3D reasoning under varied lighting, occlusion, and object conditions. In this paper, we propose a unified framework that couples robust multimodal perception with reliable grasp prediction. Our architecture fuses domain-randomized augmentation, monocular depth estimation, and a depth-aware 6-DoF Grasp Prompt into a single spatial representation for downstream action planning. Conditioned on this encoding and a high-level task prompt, our diffusion-based policy yields precise action sequences, achieving up to a 40% improvement in grasp success rate and a 45% higher task success rate under environmental variation. These results demonstrate that spatially grounded perception, paired with diffusion-based imitation learning, offers a scalable and robust solution for general-purpose robotic grasping.
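To make the described architecture concrete, the sketch below illustrates the general pattern the abstract outlines: fusing RGB features, monocular depth features, and a depth-aware 6-DoF grasp prompt into a single spatial encoding, then conditioning a diffusion-policy denoising step on that encoding plus a task prompt. This is a minimal illustration, not the authors' implementation; all module names, feature dimensions, and the single-step denoiser are assumptions made for clarity.

```python
# Hypothetical sketch of the abstract's pipeline: multimodal fusion into a
# spatial encoding, then a diffusion-policy denoising step conditioned on
# that encoding and a task prompt. Dimensions and modules are illustrative.
import torch
import torch.nn as nn


class SpatialEncoder(nn.Module):
    """Fuses RGB, depth, and 6-DoF grasp-prompt features (assumed sizes)."""

    def __init__(self, rgb_dim=512, depth_dim=256, grasp_dim=6, out_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(rgb_dim + depth_dim + grasp_dim, 512),
            nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, rgb_feat, depth_feat, grasp_prompt):
        # Concatenate modality features into one spatial representation.
        return self.fuse(torch.cat([rgb_feat, depth_feat, grasp_prompt], dim=-1))


class DiffusionPolicyStep(nn.Module):
    """One denoising step: predicts the noise on a noisy action sequence."""

    def __init__(self, enc_dim=256, task_dim=64, action_dim=7, horizon=16):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.denoiser = nn.Sequential(
            nn.Linear(horizon * action_dim + enc_dim + task_dim + 1, 512),
            nn.ReLU(),
            nn.Linear(512, horizon * action_dim),
        )

    def forward(self, noisy_actions, spatial_enc, task_prompt, timestep):
        flat = noisy_actions.flatten(1)  # (B, horizon * action_dim)
        cond = torch.cat([flat, spatial_enc, task_prompt, timestep], dim=-1)
        return self.denoiser(cond).view(-1, self.horizon, self.action_dim)


if __name__ == "__main__":
    B = 2
    encoder, policy = SpatialEncoder(), DiffusionPolicyStep()
    spatial = encoder(torch.randn(B, 512), torch.randn(B, 256), torch.randn(B, 6))
    eps_hat = policy(torch.randn(B, 16, 7), spatial, torch.randn(B, 64),
                     torch.full((B, 1), 0.5))
    print(eps_hat.shape)  # torch.Size([2, 16, 7])
```

In practice, the predicted noise would drive an iterative denoising loop over the action sequence; the sketch shows only how the fused spatial encoding and task prompt enter as conditioning.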
Similar Papers
RoboGrasp: A Universal Grasping Policy for Robust Robotic Control
Robotics
Robots learn to grab anything, anywhere, perfectly.
Bridging Perception and Action: Spatially-Grounded Mid-Level Representations for Robot Generalization
Robotics
Teaches robots to do tricky jobs better.
ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping
Robotics
Robots can grab things better by seeing them.