"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with VLMs

Published: November 12, 2025 | arXiv ID: 2511.08917v2

By: Kapil Garg, Xinru Tang, Jimin Heo, and more

Potential Business Impact:

Helps blind and low-vision people identify and understand everyday products more reliably.

Business Areas:
Visual Search, Internet Services

Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal products, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues, like blur and misframing of items, affect the accuracy of VLM-generated captions and whether the resulting captions meet BLV people's information needs. Grounded in a survey with 86 BLV people, we systematically evaluate how image quality issues affect captions generated by VLMs. We show that the best-performing model recognizes products with 98% accuracy when images have no quality issues, but its accuracy drops to 75% overall when quality issues are present, worsening considerably as issues compound. We discuss the need for model evaluations that center on disabled people's experiences throughout the process and offer concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.

Country of Origin
🇺🇸 United States

Page Count
35 pages

Category
Computer Science:
Human-Computer Interaction