Gaze-supported Large Language Model Framework for Bi-directional Human-Robot Interaction
By: Jens V. Rüppel, Andrey Rudenko, Tim Schreiter, and more
Potential Business Impact:
Robots understand you better by watching and listening.
The rapid development of Large Language Models (LLMs) creates exciting potential for flexible, general-knowledge-driven Human-Robot Interaction (HRI) systems for assistive robots. Existing HRI systems demonstrate great progress in interpreting and following user instructions, generating actions, and solving robot tasks. However, bi-directional, multi-modal, and context-aware support of the user in collaborative tasks remains an open challenge. In this paper, we present a gaze- and speech-informed interface to an assistive robot, which is able to perceive the working environment from multiple vision inputs and support the dynamic user in their tasks. Our system is designed to be modular and transferable so that it can adapt to diverse tasks and robots, and it achieves real-time operation through a language-based interaction state representation and fast on-board perception modules. Its development was supported by multiple public dissemination events, which contributed important considerations for improved robustness and user experience. Furthermore, in two lab studies, we compare the performance and user ratings of our system with those of a traditional scripted HRI pipeline. Our findings indicate that the LLM-based approach enhances adaptability and marginally improves user engagement and task execution metrics, but may produce redundant output, while the scripted pipeline is well suited for more straightforward tasks.
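To illustrate what a language-based interaction state fed by gaze and speech might look like, here is a minimal sketch. The names below (InteractionState, update_state, to_prompt) are hypothetical and not taken from the paper; the sketch only shows how a gaze target, a speech transcript, and detected objects could be serialized into natural language before being handed to an LLM backend.

```python
# Minimal sketch (illustrative names, not the authors' implementation):
# fusing gaze and speech into a language-based interaction state for an LLM.

from dataclasses import dataclass, field


@dataclass
class InteractionState:
    """Language-based snapshot of the collaborative task (hypothetical)."""
    gazed_object: str | None = None          # object the user is currently looking at
    last_utterance: str | None = None        # most recent speech transcript
    visible_objects: list[str] = field(default_factory=list)  # from vision modules

    def to_prompt(self) -> str:
        """Serialize the state into natural language for the LLM."""
        lines = [
            f"Visible objects: {', '.join(self.visible_objects) or 'none'}.",
            f"The user is looking at: {self.gazed_object or 'nothing in particular'}.",
            f'The user said: "{self.last_utterance or ""}".',
            "Decide how the robot should assist and reply to the user.",
        ]
        return "\n".join(lines)


def update_state(state: InteractionState, gaze_target: str | None,
                 transcript: str | None, detections: list[str]) -> InteractionState:
    """Merge the latest perception outputs into the interaction state."""
    state.visible_objects = detections
    if gaze_target is not None:
        state.gazed_object = gaze_target
    if transcript:
        state.last_utterance = transcript
    return state


# Example: the user looks at a cup while asking for help.
state = InteractionState()
state = update_state(state, gaze_target="red cup",
                     transcript="Can you hand me that?",
                     detections=["red cup", "plate", "bottle"])
prompt = state.to_prompt()  # this string would then be sent to the LLM backend
print(prompt)
```

In such a design, the LLM never sees raw sensor data; it receives a compact textual summary that can be regenerated on every perception update, which is one plausible way to keep the loop fast enough for real-time interaction.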
Similar Papers
Agreeing to Interact in Human-Robot Interaction using Large Language Models and Vision Language Models
Human-Computer Interaction
Helps robots know when to start talking to people.
Natural Multimodal Fusion-Based Human-Robot Interaction: Application With Voice and Deictic Posture via Large Language Model
Robotics
Robots understand what you want by voice and pointing.
SemanticScanpath: Combining Gaze and Speech for Situated Human-Robot Interaction Using LLMs
Human-Computer Interaction
Robots understand what you mean by looking.