Indicating Robot Vision Capabilities with Augmented Reality

Published: November 5, 2025 | arXiv ID: 2511.03550v1

By: Hong Wang, Ridhima Phatak, James Ocampo, and more

Potential Business Impact:

Visualizes a robot's field of view in augmented reality so human collaborators avoid asking it to act on objects it cannot see.

Business Areas:
Augmented Reality Hardware, Software

Research indicates that humans can mistakenly assume that robots and humans have the same field of view (FoV), forming an inaccurate mental model of the robot. This misperception can cause failures during human-robot collaboration, where robots may be asked to complete impossible tasks involving out-of-view objects. The problem is more severe when a robot has no opportunity to scan the scene and update its world model while focusing on its assigned task. To help align humans' mental models with robots' vision capabilities, we propose four FoV indicators in augmented reality (AR) and conducted a human-subjects experiment (N=41) to evaluate them in terms of accuracy, confidence, task efficiency, and workload. The indicators span a spectrum from egocentric (the robot's eye and head space) to allocentric (the task space). Results showed that allocentric blocks in the task space yielded the highest accuracy, at the cost of a delay in interpreting the robot's FoV. The egocentric indicator of deeper eye sockets, which could also be realized as a physical alteration, likewise increased accuracy. Across all indicators, participants' confidence remained high while cognitive load stayed low. Finally, we contribute six guidelines for practitioners applying our AR indicators or physical alterations to align humans' mental models with robots' vision capabilities.

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Robotics