Utilizing Vision-Language Models as Action Models for Intent Recognition and Assistance

Published: August 14, 2025 | arXiv ID: 2508.11093v1

By: Cesar Alan Contreras, Manolis Chiou, Alireza Rastegarpanah, and more

Potential Business Impact:

A robot that infers what the operator wants and assists in completing the task.

Human-robot collaboration requires robots to quickly infer user intent, provide transparent reasoning, and assist users in achieving their goals. Our recent work introduced GUIDER, a framework for inferring navigation and manipulation intents. We propose augmenting GUIDER with a vision-language model (VLM) and a text-only large language model (LLM) to form a semantic prior that filters objects and locations based on the mission prompt. A vision pipeline (YOLO for object detection and the Segment Anything Model for instance segmentation) feeds candidate object crops into the VLM, which scores their relevance given the operator prompt; in parallel, a text-only LLM ranks the list of detected object labels. These scores weight the existing navigation and manipulation layers of GUIDER, selecting context-relevant targets while suppressing unrelated objects. Once the combined belief exceeds a threshold, the autonomy level shifts, enabling the robot to navigate to the desired area and retrieve the desired object while adapting to changes in the operator's intent. Future work will evaluate the system in Isaac Sim with a Franka Emika arm on a Ridgeback base, focusing on real-time assistance.
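To make the scoring-and-thresholding idea concrete, here is a minimal Python sketch of how VLM crop scores and LLM label scores could be fused into a semantic prior that reweights per-object belief and triggers an autonomy change. This is not the authors' implementation: the function names, the equal 0.5/0.5 fusion weights, and the 0.6 autonomy threshold are illustrative assumptions, and the detector/VLM/LLM calls are replaced by pre-computed scores.

```python
# Minimal sketch (not the authors' code): fuse hypothetical VLM crop-relevance
# scores and LLM label-relevance scores into a semantic prior, reweight a
# GUIDER-style per-object belief, and switch autonomy once the belief of the
# best candidate clears a threshold. All names and numbers are illustrative.
from dataclasses import dataclass


@dataclass
class Candidate:
    label: str          # class label from the detector (e.g. YOLO)
    vlm_score: float    # relevance of the object crop to the prompt, in [0, 1]
    llm_score: float    # relevance of the label text to the prompt, in [0, 1]
    belief: float       # belief that this object is the intended target


AUTONOMY_THRESHOLD = 0.6  # assumed value; the paper does not specify one


def semantic_prior(c: Candidate, w_vlm: float = 0.5, w_llm: float = 0.5) -> float:
    """Combine vision-language and text-only relevance into one prior weight."""
    return w_vlm * c.vlm_score + w_llm * c.llm_score


def reweight(candidates: list[Candidate]) -> list[Candidate]:
    """Scale each belief by its semantic prior and renormalise over candidates."""
    weighted = [c.belief * semantic_prior(c) for c in candidates]
    total = sum(weighted) or 1.0
    for c, w in zip(candidates, weighted):
        c.belief = w / total
    return candidates


def maybe_switch_autonomy(candidates: list[Candidate]) -> Candidate | None:
    """Return the target to act on autonomously if its belief clears the threshold."""
    best = max(candidates, key=lambda c: c.belief)
    return best if best.belief >= AUTONOMY_THRESHOLD else None


if __name__ == "__main__":
    # Prompt: "fetch the mug" -- the mug should dominate after reweighting.
    scene = [
        Candidate("mug",      vlm_score=0.9, llm_score=0.8, belief=0.40),
        Candidate("keyboard", vlm_score=0.1, llm_score=0.1, belief=0.35),
        Candidate("plant",    vlm_score=0.2, llm_score=0.1, belief=0.25),
    ]
    target = maybe_switch_autonomy(reweight(scene))
    print(target.label if target else "stay in shared control")
```

In this toy run the mug's belief rises above the threshold after reweighting, so the sketch reports it as the autonomous target; unrelated objects are suppressed, mirroring the paper's intended behaviour at a high level.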

Country of Origin
🇬🇧 United Kingdom

Page Count
3 pages

Category
Computer Science:
Robotics