Utilizing Vision-Language Models as Action Models for Intent Recognition and Assistance
By: Cesar Alan Contreras, Manolis Chiou, Alireza Rastegarpanah, and more
Potential Business Impact:
The robot understands what you want and helps you.
Human-robot collaboration requires robots to quickly infer user intent, provide transparent reasoning, and assist users in achieving their goals. Our recent work introduced GUIDER, a framework for inferring navigation and manipulation intents. We propose augmenting GUIDER with a vision-language model (VLM) and a text-only language model (LLM) to form a semantic prior that filters objects and locations based on the mission prompt. A vision pipeline (YOLO for object detection and the Segment Anything Model for instance segmentation) feeds candidate object crops into the VLM, which scores their relevance given an operator prompt; in addition, the list of detected object labels is ranked by the text-only LLM. These scores weight the existing navigation and manipulation layers of GUIDER, promoting context-relevant targets while suppressing unrelated objects. Once the combined belief exceeds a threshold, the autonomy level shifts, enabling the robot to navigate to the target area and retrieve the requested object while adapting to any changes in the operator's intent. Future work will evaluate the system in Isaac Sim using a Franka Emika arm on a Ridgeback base, with a focus on real-time assistance.
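The sketch below illustrates the score-fusion idea described above: per-crop VLM relevance and label-level LLM ranking are combined into a semantic prior, which reweights an existing belief over targets, and an autonomy change is signalled once the belief crosses a threshold. This is a minimal illustration, not GUIDER's actual implementation; the `Candidate` type, the `alpha` mixing weight, the 0.8 threshold, and the injected `vlm_score` / `llm_rank` callables are all assumptions standing in for the real VLM and LLM calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# Hypothetical candidate type: one detected object from the YOLO + SAM stage.
# A real pipeline would carry the segmented image crop; here a crop_id suffices.
@dataclass
class Candidate:
    label: str     # class label from the detector
    crop_id: str   # handle to the segmented crop

def fuse_semantic_prior(
    candidates: List[Candidate],
    prompt: str,
    vlm_score: Callable[[str, str], float],                 # (crop_id, prompt) -> relevance in [0, 1]
    llm_rank: Callable[[List[str], str], Dict[str, float]], # (labels, prompt) -> per-label weight
    alpha: float = 0.5,                                     # assumed mixing weight between VLM and LLM scores
) -> Dict[str, float]:
    """Combine per-crop VLM relevance with label-level LLM ranking into one prior."""
    label_weights = llm_rank([c.label for c in candidates], prompt)
    prior = {}
    for c in candidates:
        visual = vlm_score(c.crop_id, prompt)
        textual = label_weights.get(c.label, 0.0)
        prior[c.crop_id] = alpha * visual + (1.0 - alpha) * textual
    return prior

def update_belief(belief: Dict[str, float], prior: Dict[str, float]) -> Dict[str, float]:
    """Weight the existing navigation/manipulation belief by the semantic prior, then renormalise."""
    weighted = {k: belief.get(k, 0.0) * prior.get(k, 0.0) for k in belief}
    total = sum(weighted.values()) or 1.0
    return {k: v / total for k, v in weighted.items()}

def maybe_shift_autonomy(belief: Dict[str, float], threshold: float = 0.8) -> Optional[str]:
    """Return the target whose belief crossed the threshold, signalling an autonomy change."""
    best = max(belief, key=belief.get)
    return best if belief[best] >= threshold else None

if __name__ == "__main__":
    # Dummy scorers standing in for the VLM and text-only LLM calls.
    candidates = [Candidate("mug", "crop_0"), Candidate("laptop", "crop_1")]
    prior = fuse_semantic_prior(
        candidates,
        prompt="bring me something to drink from",
        vlm_score=lambda crop_id, prompt: 0.9 if crop_id == "crop_0" else 0.2,
        llm_rank=lambda labels, prompt: {"mug": 0.95, "laptop": 0.05},
    )
    belief = update_belief({"crop_0": 0.5, "crop_1": 0.5}, prior)
    print(belief, "->", maybe_shift_autonomy(belief))
```

In this toy run the mug crop ends up with roughly 0.88 of the belief mass, so it crosses the 0.8 threshold and would trigger an autonomy change toward navigating to and retrieving that object; lowering the threshold trades earlier assistance for a higher risk of acting on the wrong intent.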
Similar Papers
ExploreVLM: Closed-Loop Robot Exploration Task Planning with Vision-Language Models
Robotics
Robots learn to explore and do tasks better.
Hierarchical Language Models for Semantic Navigation and Manipulation in an Aerial-Ground Robotic System
Robotics
Robots work together better using AI to move things.
10 Open Challenges Steering the Future of Vision-Language-Action Models
Robotics
Robots learn to follow spoken commands and act.