FlySearch: Exploring how vision-language models explore
By: Adam Pardyl, Dominik Matuszek, Mateusz Przebieracz and more
Potential Business Impact:
Helps robots find things in the real world.
The real world is messy and unstructured. Uncovering critical information often requires active, goal-driven exploration. It remains to be seen whether Vision-Language Models (VLMs), which have recently emerged as a popular zero-shot tool for many difficult tasks, can operate effectively in such conditions. In this paper, we answer this question by introducing FlySearch, a 3D, outdoor, photorealistic environment for searching for and navigating to objects in complex scenes. We define three sets of scenarios of varying difficulty and observe that state-of-the-art VLMs cannot reliably solve even the simplest exploration tasks, with the gap to human performance widening as the tasks get harder. We identify a set of central causes, ranging from vision hallucination and context misunderstanding to task-planning failures, and we show that some of them can be addressed by finetuning. We publicly release the benchmark, scenarios, and the underlying codebase.
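To make the evaluation setup more concrete, below is a minimal Python sketch of the kind of closed-loop protocol such a benchmark implies: the agent receives an aerial observation, a VLM proposes a relative movement command, and the episode ends when the target is found or the step budget runs out. Everything here is an illustrative assumption, not the actual FlySearch API, scene format, or prompt design; the class and function names (`ToyOutdoorScene`, `query_vlm`) are hypothetical stand-ins.

```python
# Illustrative sketch only: these classes and functions are hypothetical and do
# NOT reflect the FlySearch codebase released by the authors.
from dataclasses import dataclass
import random


@dataclass
class Observation:
    position: tuple[float, float, float]  # UAV position (x, y, z) in metres
    image: bytes                          # camera frame (empty in this toy stand-in)


class ToyOutdoorScene:
    """Stand-in environment: a hidden target at a random ground location."""

    def __init__(self, size: float = 100.0, success_radius: float = 5.0):
        self.target = (random.uniform(0, size), random.uniform(0, size), 0.0)
        self.position = (size / 2, size / 2, 30.0)
        self.success_radius = success_radius

    def observe(self) -> Observation:
        # A real environment would render a photorealistic frame here.
        return Observation(self.position, image=b"")

    def move(self, dx: float, dy: float, dz: float) -> None:
        x, y, z = self.position
        self.position = (x + dx, y + dy, max(1.0, z + dz))

    def found(self) -> bool:
        (x, y, _), (tx, ty, _) = self.position, self.target
        return ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5 <= self.success_radius


def query_vlm(obs: Observation, goal: str) -> tuple[float, float, float]:
    """Placeholder for a VLM call returning a relative movement command.

    A real evaluation would send the rendered image and a goal prompt to the
    model and parse its textual answer into (dx, dy, dz)."""
    return (random.uniform(-10, 10), random.uniform(-10, 10), 0.0)


def run_episode(goal: str = "find the red car", budget: int = 20) -> bool:
    env = ToyOutdoorScene()
    for _ in range(budget):
        if env.found():
            return True
        env.move(*query_vlm(env.observe(), goal))
    return env.found()


if __name__ == "__main__":
    wins = sum(run_episode() for _ in range(100))
    print(f"success rate: {wins}/100")
```

The fixed step budget is what makes the task a test of goal-driven exploration rather than exhaustive coverage: a random policy (as in the placeholder above) rarely finds the target, so success depends on the VLM grounding its movement commands in what it actually sees.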
Similar Papers
Grounded Vision-Language Navigation for UAVs with Open-Vocabulary Goal Understanding
Robotics
Drones fly themselves using only words and eyes.
Efficient Navigation in Unknown Indoor Environments with Vision-Language Models
Robotics
Helps robots find the shortest path in new places.
ExploreVLM: Closed-Loop Robot Exploration Task Planning with Vision-Language Models
Robotics
Robots learn to explore and do tasks better.