Indicating Robot Vision Capabilities with Augmented Reality
By: Hong Wang, Ridhima Phatak, James Ocampo, and more
Potential Business Impact:
Shows what robots can see in augmented reality to avoid mistakes.
Research indicates that humans can mistakenly assume robots share the human field of view (FoV), holding an inaccurate mental model of robots. This misperception may lead to failures in human-robot collaboration when robots are asked to complete impossible tasks involving out-of-view objects. The issue is more severe when robots cannot scan the scene to update their world model while focusing on assigned tasks. To help align humans' mental models of robots' vision capabilities, we propose four FoV indicators in augmented reality (AR) and conducted a human-subjects experiment (N=41) to evaluate them in terms of accuracy, confidence, task efficiency, and workload. These indicators span a spectrum from egocentric (the robot's eye and head space) to allocentric (the task space). Results showed that the allocentric blocks in the task space yielded the highest accuracy, albeit with a delay in interpreting the robot's FoV. The egocentric indicator of deeper eye sockets, which could also be realized as a physical alteration, likewise increased accuracy. Across all indicators, participants' confidence was high while cognitive load remained low. Finally, we contribute six guidelines for practitioners on applying our AR indicators or physical alterations to align humans' mental models with robots' vision capabilities.
Similar Papers
Impact of Gaze-Based Interaction and Augmentation on Human-Robot Collaboration in Critical Tasks
Robotics
Helps robots find people faster using eye movements.
AR as an Evaluation Playground: Bridging Metrics and Visual Perception of Computer Vision Models
CV and Pattern Recognition
Lets people test computer vision with games.
Toward Safe, Trustworthy and Realistic Augmented Reality User Experience
CV and Pattern Recognition
Keeps augmented reality safe from bad virtual things.