Multimodal Human-Intent Modeling for Contextual Robot-to-Human Handovers of Arbitrary Objects
By: Lucas Chen, Guna Avula, Hanwen Ren, and more
Potential Business Impact:
Robots learn to hand you things you want.
Human-robot object handover is a crucial capability for assistive robots that help people in daily life, in settings such as elderly care, hospitals, and factory floors. Existing approaches rely on pre-selected target objects and do not account for humans' implicit and explicit handover preferences, limiting natural and smooth interaction between humans and robots. These preferences concern both which object to select from a cluttered environment and how the robot should grasp the selected object so that the human can grasp it comfortably during the handover. This paper therefore presents a unified approach that selects distant target objects from human verbal and non-verbal commands and performs the handover by contextualizing human implicit and explicit preferences to generate robot grasps and compliant handover motion sequences. We evaluate the integrated framework and its components through real-world experiments and user studies with arbitrary daily-life objects. The results demonstrate the effectiveness of the proposed pipeline in handling object handover tasks by understanding human preferences. Our demonstration videos can be found at https://youtu.be/6z27B2INl-s.
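To make the pipeline structure concrete, here is a minimal Python sketch of the three stages the abstract describes: multimodal target selection (speech plus pointing), preference-aware grasp choice, and a compliant handover that releases on a sensed pull. All names, data layouts, and thresholds are illustrative assumptions, not the authors' actual API or method.

```python
# Hypothetical sketch of the handover pipeline from the abstract.
# Object records, Grasp fields, and the robot interface are assumptions.
from dataclasses import dataclass


@dataclass
class Grasp:
    position: tuple            # (x, y, z) grasp point on the object, metres
    approach: tuple            # unit approach vector for the gripper
    leaves_handle_free: bool   # whether the human-preferred part stays exposed


def select_target(objects, utterance, pointing_dir):
    """Pick the object whose label matches the verbal command; break ties
    by alignment with the human's pointing direction (non-verbal cue)."""
    matches = [o for o in objects if o["label"] in utterance.lower()]
    candidates = matches or objects

    def alignment(obj):
        ox, oy, oz = obj["position"]
        dot = ox * pointing_dir[0] + oy * pointing_dir[1] + oz * pointing_dir[2]
        norm = (ox**2 + oy**2 + oz**2) ** 0.5 or 1.0
        return dot / norm

    return max(candidates, key=alignment)


def choose_grasp(grasps, prefers_handle=True):
    """Encode an explicit preference: favor grasps that leave the
    human-preferred part (e.g. a handle) free for the receiver."""
    preferred = [g for g in grasps if g.leaves_handle_free == prefers_handle]
    return (preferred or grasps)[0]


def handover(robot, obj, grasp, release_force_n=5.0):
    """Grasp, move toward the human, then release compliantly once an
    external pull force is detected at the wrist (threshold assumed)."""
    robot.grasp(obj, grasp)
    robot.move_to_human()
    while robot.wrist_force() < release_force_n:
        robot.hold()
    robot.release()
```

In this sketch, verbal and non-verbal cues are fused only by a label match plus a pointing-direction tie-breaker, and the compliant release is a simple force threshold; the paper's actual intent model and motion generation are richer than this.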
Similar Papers
A Virtual Mechanical Interaction Layer Enables Resilient Human-to-Robot Object Handovers
Robotics
Robots learn to receive objects from people more reliably.
A Generative System for Robot-to-Human Handovers: from Intent Inference to Spatial Configuration Imagery
Robotics
Robots learn to hand things to people smoothly.
Modeling Dynamic Hand-Object Interactions with Applications to Human-Robot Handovers
Robotics
Robots learn to move and hand objects like people.