The perceptual gap between video see-through displays and natural human vision
By: Jialin Wang, Songming Ping, Kemu Xu, and others
Potential Business Impact:
Virtual reality still can't match real-life vision.
Video see-through (VST) technology aims to seamlessly blend virtual and physical worlds by reconstructing reality through cameras. While manufacturers promise perceptual fidelity, it remains unclear how close these systems come to replicating natural human vision across varying environmental conditions. In this work, we quantify the perceptual gap between the human eye and three popular VST headsets (Apple Vision Pro, Meta Quest 3, and Meta Quest Pro) using psychophysical measures of visual acuity, contrast sensitivity, and color vision. We show that despite hardware advancements, all tested VST systems fail to match the dynamic range and adaptability of the naked eye. While high-end devices approach human performance in ideal lighting, they degrade significantly in low-light conditions, particularly in contrast sensitivity and acuity. Our results map the physiological limitations of digital reality reconstruction, establishing a specific perceptual gap that defines the roadmap for achieving indistinguishable VST experiences.
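The psychophysical measures named above have standard quantitative definitions. As an illustrative sketch (not the paper's code, and with purely hypothetical example values), visual acuity is typically reported in logMAR (the log of the minimum resolvable angle in arcminutes), and contrast sensitivity is the reciprocal of the lowest Michelson contrast an observer can detect:

```python
import math

def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast of a grating: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def contrast_sensitivity(threshold_contrast: float) -> float:
    """Sensitivity is the reciprocal of the lowest detectable contrast."""
    return 1.0 / threshold_contrast

def logmar(min_angle_arcmin: float) -> float:
    """Visual acuity in logMAR from the minimum angle of resolution
    (arcminutes). logMAR 0.0 corresponds to 20/20 Snellen acuity."""
    return math.log10(min_angle_arcmin)

# Hypothetical numbers for illustration only: if a headset's smallest
# resolvable detail subtends 2 arcminutes, its acuity is logMAR ~0.3
# (roughly 20/40); a 1% contrast threshold gives a sensitivity of 100.
print(round(logmar(2.0), 2))
print(contrast_sensitivity(0.01))
```

Under these conventions, a larger logMAR value and a lower contrast sensitivity both indicate worse performance, which is how a headset's low-light degradation relative to the naked eye can be expressed on a common scale.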
Similar Papers
Geometry Aware Passthrough Mitigates Cybersickness
Human-Computer Interaction
Makes VR headsets less likely to make you sick.
See What I Mean? Mobile Eye-Perspective Rendering for Optical See-through Head-mounted Displays
Human-Computer Interaction
Makes AR glasses show things correctly.
VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People
Human-Computer Interaction
Lets blind people explore virtual worlds.