A Multi-Modal Interaction Framework for Efficient Human-Robot Collaborative Shelf Picking
By: Abhinav Pathak, Kalaichelvi Venkatesan, Tarek Taha, and more
Potential Business Impact:
Robot helps people pick boxes by understanding gestures.
The growing presence of service robots in human-centric environments, such as warehouses, demands seamless and intuitive human-robot collaboration. In this paper, we propose a collaborative shelf-picking framework that combines multimodal interaction, physics-based reasoning, and task division for enhanced human-robot teamwork. The framework enables the robot to recognize human pointing gestures, interpret verbal cues and voice commands, and communicate through visual and auditory feedback. Moreover, it is powered by a Large Language Model (LLM) that combines Chain-of-Thought (CoT) reasoning with a physics-based simulation engine to safely retrieve boxes from cluttered stacks on shelves, and uses a relationship graph for sub-task generation, extraction-sequence planning, and decision making. Furthermore, we validate the framework through real-world shelf-picking experiments: 1) Gesture-Guided Box Extraction, 2) Collaborative Shelf Clearing, and 3) Collaborative Stability Assistance.
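To make the described pipeline concrete, the sketch below illustrates one plausible flow: a pointing gesture and a voice command are fused into a target box, a relationship graph of stacked boxes drives extraction-sequence planning, and a stability check gates each removal. This is a minimal illustration, not the authors' implementation: the names (Box, resolve_target, plan_extraction, is_stable_after_removal), the graph format, and the stand-ins for the LLM's CoT planning and the physics engine are all assumptions made for readability.

```python
# Minimal sketch (assumed interfaces, not the paper's code) of a collaborative
# shelf-picking loop: multimodal target resolution -> relationship-graph-based
# extraction planning -> per-step stability check before execution.

from dataclasses import dataclass


@dataclass
class Box:
    name: str
    supports: list[str]  # names of boxes resting on top of this one (graph edges)


def resolve_target(pointing_hint: str, voice_command: str, boxes: dict[str, Box]) -> str:
    """Fuse gesture and speech into one target box (placeholder fusion logic)."""
    # A real system would intersect the pointing ray with shelf geometry and
    # ground the verbal reference; here we simply match a box name in the command.
    for name in boxes:
        if name in voice_command:
            return name
    return pointing_hint  # fall back to the gesture hypothesis


def plan_extraction(target: str, boxes: dict[str, Box]) -> list[str]:
    """Order sub-tasks so every box above the target is removed first
    (stand-in for the LLM's CoT-driven sequence planning)."""
    sequence: list[str] = []

    def clear_above(name: str) -> None:
        for supported in boxes[name].supports:
            clear_above(supported)
            if supported not in sequence:
                sequence.append(supported)

    clear_above(target)
    sequence.append(target)
    return sequence


def is_stable_after_removal(name: str, boxes: dict[str, Box]) -> bool:
    """Stand-in for the physics-based simulation check."""
    # The paper validates moves in a physics engine; this stub only refuses to
    # pull a box that still has something resting on it.
    return len(boxes[name].supports) == 0


if __name__ == "__main__":
    # Toy shelf state: box_c sits on box_b, which sits on box_a.
    shelf = {
        "box_a": Box("box_a", supports=["box_b"]),
        "box_b": Box("box_b", supports=["box_c"]),
        "box_c": Box("box_c", supports=[]),
    }
    target = resolve_target("box_a", "please get box_a", shelf)
    for step in plan_extraction(target, shelf):
        assert is_stable_after_removal(step, shelf), f"unsafe to remove {step}"
        for other in shelf.values():        # update the relationship graph
            if step in other.supports:
                other.supports.remove(step)
        print(f"extract {step}")            # boxes come out top-down: c, b, a
```

Under these assumptions, the toy run prints the extraction order box_c, box_b, box_a, mirroring the paper's idea that boxes above the requested one are cleared (or handed to the human partner) before the target is safely retrieved.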
Similar Papers
Designing Intent: A Multimodal Framework for Human-Robot Cooperation in Industrial Workspaces
Human-Computer Interaction
Helps robots and people work together safely.
Multimodal "Puppeteer": An Exploration of Robot Teleoperation Via Virtual Counterpart with LLM-Driven Voice and Gesture Interaction in Augmented Reality
Human-Computer Interaction
Control robots with your voice and hands.
CollaBot: Vision-Language Guided Simultaneous Collaborative Manipulation
Robotics
Robots work together to move big things.