OpenGuide: Assistive Object Retrieval in Indoor Spaces for Individuals with Visual Impairments
By: Yifan Xu, Qianwei Wang, Vineet Kamat, and more
Potential Business Impact:
Robot finds requested objects for blind people.
Indoor built environments such as homes and offices often present complex and cluttered layouts that pose significant challenges for individuals who are blind or visually impaired, especially when performing tasks that involve locating and gathering multiple objects. While many existing assistive technologies focus on basic navigation or obstacle avoidance, few systems provide scalable and efficient multi-object search capabilities in real-world, partially observable settings. To address this gap, we introduce OpenGuide, an assistive mobile robot system that combines natural language understanding with vision-language foundation models (VLMs), frontier-based exploration, and a Partially Observable Markov Decision Process (POMDP) planner. OpenGuide interprets open-vocabulary requests, reasons about object-scene relationships, and adaptively navigates to and localizes multiple target items in novel environments. Our approach enables robust recovery from missed detections through value decay and belief-space reasoning, resulting in more effective exploration and object localization. We validate OpenGuide in simulated and real-world experiments, demonstrating substantial improvements in task success rate and search efficiency over prior methods. This work establishes a foundation for scalable, human-centered robotic assistance in assisted living environments.
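To make the value-decay idea from the abstract concrete, the sketch below is a minimal illustration (not the authors' implementation) of how a planner can recover from missed detections: the robot keeps a per-cell belief that a target is present, multiplicatively decays the value of cells it has observed without a detection, and steers toward the frontier with the highest remaining belief mass. All names here (GRID_SIZE, DECAY, Belief) are illustrative assumptions, not from the paper.

```python
import numpy as np

GRID_SIZE = (20, 20)   # discretized map, assumed for illustration
DECAY = 0.5            # multiplicative value decay after a missed detection

class Belief:
    """Per-object belief over grid cells, updated on missed detections."""

    def __init__(self):
        # Uniform prior: the object could be anywhere on the map.
        self.p = np.full(GRID_SIZE, 1.0 / np.prod(GRID_SIZE))

    def miss(self, cell):
        """Observed `cell` but did not see the object: decay its value.

        Keeping residual mass (rather than zeroing the cell) is what lets
        the planner revisit a cell later and recover from a false-negative
        detection."""
        self.p[cell] *= DECAY
        self.p /= self.p.sum()  # renormalize to a valid distribution

    def best_frontier(self, frontiers):
        """Pick the frontier cell with the highest remaining belief mass."""
        return max(frontiers, key=lambda c: self.p[c])

# Toy usage: repeated misses at (5, 5) push the search toward other frontiers.
belief = Belief()
frontiers = [(5, 5), (12, 3), (18, 18)]
for _ in range(2):
    belief.miss((5, 5))
print(belief.best_frontier(frontiers))  # now prefers an unexplored frontier
```

In a full POMDP planner this belief update would be one component of the observation model, and frontier selection would trade off expected value against travel cost; the sketch only shows the decay-and-reselect loop the abstract describes.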
Similar Papers
OVAMOS: A Framework for Open-Vocabulary Multi-Object Search in Unknown Environments
Robotics
Helps robots find many hidden things in new places.
GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant For Blind Travelers
Robotics
Helps robots learn paths like guide dogs.
Utilizing Vision-Language Models as Action Models for Intent Recognition and Assistance
Robotics
Robot understands what you want and helps you.