Multimodal Human-Intent Modeling for Contextual Robot-to-Human Handovers of Arbitrary Objects

Published: August 5, 2025 | arXiv ID: 2508.02982v1

By: Lucas Chen, Guna Avula, Hanwen Ren, and more

Potential Business Impact:

Robots learn to hand you things you want.

Human-robot object handover is a crucial capability for assistive robots that help people in daily life, in settings such as elderly care, hospitals, and factory floors. Existing approaches rely on pre-selected target objects and do not contextualize humans' implicit and explicit preferences for the handover, limiting natural and smooth human-robot interaction. These preferences concern both which object is selected from a cluttered environment and how the robot should grasp the selected object so that the human can grasp it comfortably during the handover. This paper therefore presents a unified approach that selects distant target objects from human verbal and non-verbal commands and performs the handover by contextualizing implicit and explicit human preferences to generate robot grasps and compliant handover motion sequences. We evaluate the integrated framework and its components through real-world experiments and user studies with arbitrary daily-life objects. The results demonstrate the effectiveness of the proposed pipeline in handling object handover tasks by understanding human preferences. Our demonstration videos can be found at https://youtu.be/6z27B2INl-s.
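The abstract describes fusing verbal and non-verbal commands (e.g., speech plus a pointing gesture) to select a target object from clutter. The paper does not expose its implementation, so the following is only a minimal illustrative sketch of such multimodal fusion; all names, the cosine-based gesture score, and the fusion weight `w_verbal` are assumptions, not the authors' method.

```python
from dataclasses import dataclass
import math

@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y) in the robot's frame

def gesture_score(obj, origin, direction):
    """Cosine similarity between a unit pointing ray and the object direction."""
    dx, dy = obj.position[0] - origin[0], obj.position[1] - origin[1]
    norm = math.hypot(dx, dy) or 1e-9
    return (dx * direction[0] + dy * direction[1]) / norm

def verbal_score(obj, utterance):
    """Crude verbal cue: 1.0 if the object's name is mentioned, else 0.0."""
    return 1.0 if obj.name in utterance.lower() else 0.0

def select_target(objects, utterance, origin, direction, w_verbal=0.6):
    """Weighted fusion of verbal and gesture cues; returns the best object."""
    def score(o):
        return (w_verbal * verbal_score(o, utterance)
                + (1 - w_verbal) * gesture_score(o, origin, direction))
    return max(objects, key=score)

scene = [SceneObject("mug", (1.0, 0.0)), SceneObject("bottle", (0.0, 1.0))]
# Speech names the mug while the gesture points toward the bottle;
# with w_verbal=0.6 the verbal cue dominates.
target = select_target(scene, "hand me the mug", origin=(0, 0), direction=(0, 1))
print(target.name)  # → mug
```

In a real system the verbal score would come from a language model grounding the utterance in detected objects, and the gesture ray from pose estimation, but the same weighted-fusion structure applies.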

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science: Robotics