AR as an Evaluation Playground: Bridging Metrics and Visual Perception of Computer Vision Models
By: Ashkan Ganj, Yiqin Zhao, Tian Guo
Potential Business Impact:
Lets researchers test computer vision models through interactive AR tasks.
Human perception studies can provide complementary insights to quantitative evaluation for understanding computer vision (CV) model performance. However, conducting human perception studies remains a non-trivial task: it often requires complex, end-to-end system setups that are time-consuming and difficult to scale. In this paper, we explore the unique opportunity presented by augmented reality (AR) for helping CV researchers conduct perceptual studies. We design ARCADE, an evaluation platform that allows researchers to easily leverage AR's rich context and interactivity for human-centered CV evaluation. Specifically, ARCADE supports cross-platform AR data collection, custom experiment protocols via pluggable model inference, and AR streaming for user studies. We demonstrate ARCADE on two types of CV models, depth estimation and lighting estimation, and show that AR tasks can be effectively used to elicit human perceptual judgments of model quality. We also evaluate the system's usability and performance across different deployment and study settings, highlighting its flexibility and effectiveness as a human-centered evaluation platform.
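The abstract mentions "custom experiment protocols via pluggable model inference." The paper does not describe ARCADE's actual API, but a pluggable-inference design can be sketched as a small registry of interchangeable models behind a common interface. Everything below (class names, methods, the constant-depth placeholder model) is hypothetical, not ARCADE's real code.

```python
from abc import ABC, abstractmethod


class InferenceModel(ABC):
    """Hypothetical plug-in interface: each CV model exposes predict()."""

    @abstractmethod
    def predict(self, frame):
        """Run inference on one camera frame and return the model output."""


class DepthEstimator(InferenceModel):
    """Placeholder depth model: returns a constant depth map sized like the input."""

    def predict(self, frame):
        h, w = len(frame), len(frame[0])
        return [[1.0] * w for _ in range(h)]


class ModelRegistry:
    """Registry so an experiment protocol can select a model by name."""

    def __init__(self):
        self._models = {}

    def register(self, name, model):
        self._models[name] = model

    def run(self, name, frame):
        # Dispatch the frame to whichever plugged-in model the study selected.
        return self._models[name].predict(frame)


registry = ModelRegistry()
registry.register("depth", DepthEstimator())

frame = [[0, 0], [0, 0]]  # toy 2x2 "image"
depth = registry.run("depth", frame)
print(depth)  # [[1.0, 1.0], [1.0, 1.0]]
```

Under this kind of design, swapping the model under evaluation (e.g. a different depth or lighting estimator) would only require registering a new `InferenceModel` implementation, leaving the AR experiment protocol unchanged.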
Similar Papers
Investigating Encoding and Perspective for Augmented Reality
Human-Computer Interaction
Helps AR guide any body movement, not just arms.
Investigating Search Among Physical and Virtual Objects Under Different Lighting Conditions
Human-Computer Interaction
Makes games appear in the real world.
Toward Safe, Trustworthy and Realistic Augmented Reality User Experience
CV and Pattern Recognition
Keeps augmented reality safe from bad virtual things.