"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with VLMs
By: Kapil Garg, Xinru Tang, Jimin Heo, and more
Potential Business Impact:
Helps blind people understand products better.
Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal products, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues, like blur and misframing of items, affect the accuracy of VLM-generated captions and whether resulting captions meet BLV people's information needs. Grounded in a survey with 86 BLV people, we systematically evaluate how image quality issues affect captions generated by VLMs. We show that the best model recognizes products in images with no quality issues with 98% accuracy, but drops to 75% accuracy overall when quality issues are present, worsening considerably as issues compound. We discuss the need for model evaluations that center on disabled people's experiences throughout the process and offer concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.
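To make the evaluation concrete, the sketch below shows one way such a study could be run in code: apply simulated quality issues (blur, misframing, and both combined) to product photos, ask a VLM to identify the product, and compute per-condition recognition accuracy. This is not the authors' pipeline; the model choice (gpt-4o via the OpenAI API), degradation parameters, prompt, and the substring-based correctness check are illustrative assumptions only.

```python
"""Minimal sketch of an image-quality evaluation loop for VLM product captioning.

Illustrative only: model, degradations, prompt, and accuracy check are assumptions,
not the paper's actual methodology.
"""
import base64
import io

from openai import OpenAI  # assumed VLM backend; any captioning API could stand in
from PIL import Image, ImageFilter

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def degrade(img: Image.Image, condition: str) -> Image.Image:
    """Apply a simulated quality issue (blur or misframing) to a product photo."""
    if condition == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=8))  # heavy camera blur
    if condition == "misframed":
        w, h = img.size
        return img.crop((w // 2, h // 2, w, h))  # product partially out of frame
    if condition == "blur+misframed":
        return degrade(degrade(img, "misframed"), "blur")  # compounded issues
    return img  # "clean" baseline


def caption(img: Image.Image) -> str:
    """Ask the VLM to identify the product in the image."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    data_url = "data:image/jpeg;base64," + base64.b64encode(buf.getvalue()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap in whichever VLM is under evaluation
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    )
    return resp.choices[0].message.content


def evaluate(samples: list[tuple[str, str]]) -> dict[str, float]:
    """Compute recognition accuracy per quality condition.

    `samples` is a list of (image_path, ground_truth_product_name) pairs.
    A caption counts as correct if it names the ground-truth product; a real
    study would use more careful matching or human judgment.
    """
    conditions = ["clean", "blur", "misframed", "blur+misframed"]
    correct = {c: 0 for c in conditions}
    for path, product in samples:
        img = Image.open(path).convert("RGB")
        for c in conditions:
            if product.lower() in caption(degrade(img, c)).lower():
                correct[c] += 1
    return {c: correct[c] / len(samples) for c in conditions}
```

Comparing the "clean" accuracy against the degraded and compounded conditions mirrors the kind of gap the paper reports (98% with no quality issues versus 75% overall when issues are present).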
Similar Papers
"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with VLMs
Human-Computer Interaction
Helps blind people understand products better.
Towards Blind and Low-Vision Accessibility of Lightweight VLMs and Custom LLM-Evals
CV and Pattern Recognition
Helps blind people understand videos better.
Bias in the Picture: Benchmarking VLMs with Social-Cue News Images and LLM-as-Judge Assessment
CV and Pattern Recognition
Finds and fixes unfairness in AI that sees and reads.