VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People
By: Daniel Killough, Justin Feng, Zheng Xue "ZX" Ching, and more
Potential Business Impact:
Lets blind people explore virtual worlds.
Virtual Reality (VR) is inaccessible to blind people. While research has investigated many techniques to enhance VR accessibility, these techniques require additional developer effort to integrate. As such, most mainstream VR apps remain inaccessible as the industry de-prioritizes accessibility. We present VRSight, an end-to-end system that recognizes VR scenes post hoc through a set of AI models (e.g., object detection, depth estimation, LLM-based atmosphere interpretation) and generates tone-based, spatial audio feedback, empowering blind users to interact in VR without developer intervention. To enable virtual element detection, we further contribute DISCOVR, a VR dataset comprising 30 virtual object classes from 17 social VR apps, substituting for real-world datasets that do not transfer to VR contexts. Nine participants used VRSight to explore an off-the-shelf VR app (Rec Room), demonstrating its effectiveness in facilitating social tasks such as avatar awareness and available-seat identification.
Similar Papers
RAVEN: Realtime Accessibility in Virtual ENvironments for Blind and Low-Vision People
Human-Computer Interaction
Helps blind people explore virtual worlds by talking.
Investigating VR Accessibility Reviews for Users with Disabilities: A Qualitative Analysis
Software Engineering
Makes virtual reality games easier for disabled players.