Score: 1

Toward Human-Robot Teaming: Learning Handover Behaviors from 3D Scenes

Published: August 13, 2025 | arXiv ID: 2508.09855v1

By: Yuekun Wu, Yik Lung Pang, Andrea Cavallaro, et al.

Potential Business Impact:

Robots learn to safely receive objects handed to them by people.

Human-robot teaming (HRT) systems often rely on large-scale datasets of human and robot interactions, especially for close-proximity collaboration tasks such as human-robot handovers. Learning robot manipulation policies from raw, real-world image data requires a large number of robot-action trials in the physical environment. Although simulation training offers a cost-effective alternative, the visual domain gap between simulation and the robot workspace remains a major limitation. We introduce a method for training HRT policies, focusing on human-to-robot handovers, solely from RGB images, without the need for real-robot training or real-robot data collection. The goal is to enable the robot to reliably receive objects from a human with a stable grasp while avoiding collisions with the human hand. The proposed policy learner leverages sparse-view Gaussian Splatting reconstructions of human-to-robot handover scenes to generate robot demonstrations consisting of image-action pairs captured with a camera mounted on the robot gripper. As a result, simulated camera pose changes in the reconstructed scene can be translated directly into gripper pose changes. Experiments in both Gaussian Splatting reconstructed scenes and real-world human-to-robot handovers demonstrate that our method serves as a new and effective representation for the human-to-robot handover task, contributing to more seamless and robust HRT.
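The key geometric step described in the abstract is that, with an eye-in-hand camera, a camera pose rendered from the reconstructed scene implies a unique gripper pose through the fixed hand-eye transform. The sketch below illustrates this relationship; it is a minimal illustration, not the paper's implementation, and the hand-eye matrix `X_GRIPPER_TO_CAM` and function names are assumptions for the example.

```python
import numpy as np

# Hypothetical hand-eye calibration: fixed 4x4 homogeneous transform from
# the gripper frame to the wrist-mounted camera frame (values are assumed).
X_GRIPPER_TO_CAM = np.array([
    [1.0, 0.0, 0.0, 0.00],
    [0.0, 1.0, 0.0, 0.05],  # assumed 5 cm camera offset along gripper y-axis
    [0.0, 0.0, 1.0, 0.08],  # assumed 8 cm camera offset along gripper z-axis
    [0.0, 0.0, 0.0, 1.0],
])

def gripper_pose_from_camera_pose(T_world_cam: np.ndarray) -> np.ndarray:
    """Recover the gripper pose implied by a (simulated) camera pose.

    With an eye-in-hand camera, T_world_cam = T_world_gripper @ X,
    so T_world_gripper = T_world_cam @ inv(X).
    """
    return T_world_cam @ np.linalg.inv(X_GRIPPER_TO_CAM)

def gripper_action(T_cam_t: np.ndarray, T_cam_t1: np.ndarray) -> np.ndarray:
    """Relative gripper motion between two camera poses rendered from the
    reconstructed scene, expressed in the gripper frame at time t."""
    T_grip_t = gripper_pose_from_camera_pose(T_cam_t)
    T_grip_t1 = gripper_pose_from_camera_pose(T_cam_t1)
    return np.linalg.inv(T_grip_t) @ T_grip_t1
```

Under this reading, pairing each rendered image with the relative gripper motion between consecutive simulated camera poses yields the image-action demonstrations used to train the handover policy.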

Country of Origin
🇬🇧 United Kingdom, 🇨🇭 Switzerland

Page Count
3 pages

Category
Computer Science:
Robotics