SceneScout: Towards AI Agent-driven Access to Street View Imagery for Blind Users
By: Gaurav Jain, Leah Findlater, Cole Gleason
Potential Business Impact:
Lets blind people "see" street views before traveling.
People who are blind or have low vision (BLV) may hesitate to travel independently in unfamiliar environments due to uncertainty about the physical landscape. While most tools focus on in-situ navigation, those exploring pre-travel assistance typically provide only landmarks and turn-by-turn instructions, lacking detailed visual context. Street view imagery, which contains rich visual information and has the potential to reveal numerous environmental details, remains inaccessible to BLV people. In this work, we introduce SceneScout, a multimodal large language model (MLLM)-driven AI agent that enables accessible interactions with street view imagery. SceneScout supports two modes: (1) Route Preview, enabling users to familiarize themselves with visual details along a route, and (2) Virtual Exploration, enabling free movement within street view imagery. Our user study (N=10) demonstrates that SceneScout helps BLV users uncover visual information otherwise unavailable through existing means. A technical evaluation shows that most descriptions are accurate (72%) and describe stable visual elements (95%) even in older imagery, though occasional subtle and plausible errors make them difficult to verify without sight. We discuss future opportunities and challenges of using street view imagery to enhance navigation experiences.
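The abstract describes SceneScout's architecture only at a high level: an MLLM-driven agent layered over street view imagery. As a minimal sketch of that general pattern, the code below fetches a single street view frame and asks a vision-capable model to describe it for a blind pedestrian. It assumes the Google Street View Static API for imagery and the OpenAI chat API as the MLLM; the prompt text, the `fetch_street_view` and `describe_street_view` helpers, and all parameter choices are illustrative assumptions, not SceneScout's actual implementation.

```python
import base64
import requests
from openai import OpenAI

# Hypothetical sketch, not SceneScout's code: grab one street view frame and
# have a multimodal model describe it for a blind or low-vision pedestrian.

STREET_VIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def fetch_street_view(lat: float, lng: float, heading: int, api_key: str) -> bytes:
    """Download a single image via the Google Street View Static API."""
    params = {
        "size": "640x640",
        "location": f"{lat},{lng}",
        "heading": heading,  # 0-360 degrees, direction the camera faces
        "fov": 90,           # field of view in degrees
        "key": api_key,
    }
    resp = requests.get(STREET_VIEW_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.content

def describe_street_view(image_bytes: bytes, client: OpenAI) -> str:
    """Ask a multimodal model for a pedestrian-focused description (assumed prompt)."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed MLLM; any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Describe this street view for a blind pedestrian: "
                          "sidewalk condition, crossings, obstacles, landmarks.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    image = fetch_street_view(40.7484, -73.9857, heading=90, api_key="YOUR_MAPS_KEY")
    print(describe_street_view(image, client))
```

In terms of the paper's two modes, Route Preview would presumably repeat a call like this at sampled points along a route, while Virtual Exploration would re-fetch with an updated location and heading as the user moves.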
Similar Papers
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Human-Computer Interaction
Lets blind people explore the world virtually.
StreetLens: Enabling Human-Centered AI Agents for Neighborhood Assessment from Street View Imagery
Human-Computer Interaction
Helps researchers study neighborhoods faster with AI.