"I Can See Forever!": Evaluating Real-time VideoLLMs for Assisting Individuals with Visual Impairments
By: Ziyi Zhang, Zhen Sun, Zongmin Zhang, and more
Potential Business Impact:
Helps blind people navigate daily life safely.
The visually impaired population, especially those with severe impairments, is large, and everyday activities pose significant challenges for them. Although many studies use large language and vision-language models to assist blind and low-vision people, most focus on static content and fail to meet the real-time perception needs of dynamic, complex environments such as daily activities. Providing more effective intelligent assistance therefore requires incorporating advanced visual understanding technologies. Although VideoLLMs with real-time vision and speech interaction demonstrate strong real-time visual understanding, no prior work has systematically evaluated their effectiveness in assisting visually impaired individuals. In this work, we conduct the first such evaluation. First, we construct a benchmark dataset (VisAssistDaily) covering three categories of assistive tasks for visually impaired individuals: Basic Skills, Home Life Tasks, and Social Life Tasks. The results show that GPT-4o achieves the highest task success rate. Next, we conduct a user study to evaluate the models in both closed-world and open-world scenarios, further exploring the practical challenges of applying VideoLLMs in assistive contexts. One key issue we identify is the difficulty current models have in perceiving potential hazards in dynamic environments. To address this, we build an environment-awareness dataset named SafeVid and introduce a polling mechanism that enables the model to proactively detect environmental risks. We hope this work provides valuable insights and inspiration for future research in this field.
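The abstract describes the polling mechanism only at a high level, without implementation details. The sketch below is a minimal illustration of what such a loop could look like, assuming a camera feed sampled at a fixed interval; the interval, the prompt, and the query_videollm wrapper are illustrative assumptions standing in for a real VideoLLM client, not the paper's actual design.

```python
import time

import cv2  # pip install opencv-python

# Assumed polling period; the paper does not publish its exact interval here.
POLL_INTERVAL_S = 2.0
HAZARD_PROMPT = (
    "You are assisting a visually impaired user. Describe any imminent "
    "hazard in this frame (obstacles, traffic, stairs). Reply 'SAFE' if none."
)


def query_videollm(frame, prompt: str) -> str:
    """Hypothetical wrapper around a real-time VideoLLM client.

    Replace the stub below with a call to an actual multimodal API;
    it should return the model's text response for the given frame.
    """
    return "SAFE"  # stub so the sketch runs end to end


def poll_for_hazards() -> None:
    cap = cv2.VideoCapture(0)  # default camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera unavailable or stream ended
            reply = query_videollm(frame, HAZARD_PROMPT)
            if reply.strip().upper() != "SAFE":
                # In a deployed system this would be routed to text-to-speech.
                print(f"[ALERT] {reply}")
            time.sleep(POLL_INTERVAL_S)
    finally:
        cap.release()


if __name__ == "__main__":
    poll_for_hazards()
```

Polling at a fixed interval trades responsiveness against latency and API cost; a real deployment would likely adapt the rate to scene motion or model confidence.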
Similar Papers
A Large Vision-Language Model based Environment Perception System for Visually Impaired People
CV and Pattern Recognition
Helps blind people "see" their surroundings with AI.
GazeLLM: Multimodal LLMs incorporating Human Visual Attention
Human-Computer Interaction
Lets computers understand videos by tracking where people look.
Probing the Gaps in ChatGPT Live Video Chat for Real-World Assistance for People who are Blind or Visually Impaired
Human-Computer Interaction
AI helps blind people see with live video.