Guiding Multimodal Large Language Models with Blind and Low Vision People Visual Questions for Proactive Visual Interpretations
By: Ricardo Gonzalez Penuela, Felipe Arias-Russi, Victor Capriles
Potential Business Impact:
Helps blind people get answers they need faster.
Multimodal large language models (MLLMs) have been integrated into visual interpretation applications to support Blind and Low Vision (BLV) users because of their accuracy and ability to provide rich, human-like interpretations. However, these applications often default to comprehensive, lengthy descriptions regardless of context. This leads to inefficient exchanges, as users must sift through irrelevant details rather than receiving the specific information they are likely to seek. To deliver more contextually relevant information, we developed a system that draws on BLV users' historical questions. When given an image, our system identifies similar past visual contexts from the VizWiz-LF dataset and uses the associated questions to guide the MLLM to generate descriptions more relevant to BLV users. An evaluation with three human labelers who reviewed 92 pairs of context-aware and context-free descriptions showed that context-aware descriptions anticipated and answered users' questions in 76.1% of cases (70 out of 92) and were preferred in 54.4% of comparisons (50 out of 92). Our paper reviews and data analysis are publicly available in a GitHub repository at https://github.com/rgonzalezp/guiding-multimodal-large-language-models-with-blind-and-low-vision-people-visual-questions.
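As a rough sketch of how such retrieval-guided prompting might work (not the authors' implementation), a CLIP-style encoder could embed the incoming photo, retrieve the most visually similar past images, and fold their associated BLV user questions into the MLLM prompt. The model choice, file paths, and example questions below are illustrative assumptions, and the toy list stands in for a real VizWiz-LF index:

```python
# Hypothetical sketch of question-guided description prompting; not the paper's code.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP-style model that embeds images and text into a shared space (assumed choice).
model = SentenceTransformer("clip-ViT-B-32")

# Toy stand-in for a VizWiz-LF index: (image path, question a BLV user asked about it).
corpus = [
    ("vizwiz/medicine_bottle.jpg", "What are the dosage instructions on this label?"),
    ("vizwiz/shirt.jpg", "What color is this shirt?"),
    ("vizwiz/canned_food.jpg", "What kind of food is in this can?"),
]
corpus_embeddings = model.encode([Image.open(path) for path, _ in corpus])

def build_prompt(query_image_path, top_k=2):
    """Retrieve questions asked about visually similar past images and
    fold them into a description prompt for an MLLM."""
    query_embedding = model.encode(Image.open(query_image_path))
    scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
    top_indices = scores.topk(top_k).indices.tolist()
    retrieved_questions = [corpus[i][1] for i in top_indices]
    question_list = "\n".join(f"- {q}" for q in retrieved_questions)
    return (
        "Describe this image for a blind or low vision user. "
        "Prioritize information that answers questions users have asked "
        "about visually similar images:\n" + question_list
    )

# The resulting prompt would be sent to an MLLM together with the query image.
print(build_prompt("query_photo.jpg"))
```

Under these assumptions, the retrieved questions act as a soft prior on what a BLV user is likely to ask about this kind of scene, steering the MLLM away from exhaustive, unfocused descriptions.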
Similar Papers
Towards Understanding the Use of MLLM-Enabled Applications for Visual Interpretation by Blind and Low Vision People
Human-Computer Interaction
Helps blind people understand the world better.
Towards Blind and Low-Vision Accessibility of Lightweight VLMs and Custom LLM-Evals
CV and Pattern Recognition
Helps blind people understand videos better.
"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with VLMs
Human-Computer Interaction
Helps blind people understand products better.