Not There Yet: Evaluating Vision Language Models in Simulating the Visual Perception of People with Low Vision
By: Rosiana Natalie, Wenqian Xu, Ruei-Che Chang, and more
Potential Business Impact:
Helps computers simulate how people with low vision see images.
Advances in vision language models (VLMs) have enabled the simulation of general human behavior through their reasoning and problem-solving capabilities. However, prior research has not investigated such simulation capabilities in the accessibility domain. In this paper, we evaluate the extent to which VLMs can simulate the visual perception of low vision individuals when interpreting images. We first compile a benchmark dataset through a survey study with 40 low vision participants, collecting their brief and detailed vision information along with both open-ended and multiple-choice image perception and recognition responses to up to 25 images. Using these responses, we construct prompts for a VLM (GPT-4o) to create a simulated agent for each participant, varying the inclusion of vision information and example image responses. We then evaluate the agreement between VLM-generated responses and participants' original answers. Our results indicate that VLMs tend to infer beyond the specified vision ability when given minimal prompts, resulting in low agreement (0.59). Agreement between the agents' and participants' responses remains low when only the vision information (0.59) or only example image responses (0.59) is provided, whereas a combination of both significantly increases agreement (0.70, p < 0.0001). Notably, a single example combining both open-ended and multiple-choice responses offers significant improvements over either alone (p < 0.0001), while additional examples provide minimal benefit (p > 0.05).
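For intuition, the simulation-and-scoring pipeline can be sketched in a few lines of Python. This is a minimal illustration assuming the OpenAI Python SDK's chat completions interface; the prompt wording, data fields, and exact-match agreement metric below are illustrative assumptions, not the authors' published protocol, and image attachments are omitted for brevity.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_agent_messages(vision_info, example, question):
    # Persona prompt built from a participant's self-reported vision
    # description, optionally preceded by one example image response
    # (the condition that gave the largest gains in the study).
    system = (
        "You are simulating a specific low vision participant. "
        f"Their self-reported vision: {vision_info}. "
        "Answer questions about images exactly as this person would."
    )
    messages = [{"role": "system", "content": system}]
    if example:
        messages.append({"role": "user", "content": example["question"]})
        messages.append({"role": "assistant", "content": example["answer"]})
    messages.append({"role": "user", "content": question})
    return messages

def ask_agent(vision_info, example, question):
    # Query GPT-4o as the simulated participant agent.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=build_agent_messages(vision_info, example, question),
    )
    return resp.choices[0].message.content.strip()

def agreement(simulated, original):
    # Fraction of questions where the simulated agent's answer matches
    # the participant's original response (exact match).
    return sum(s == o for s, o in zip(simulated, original)) / len(original)

Under this framing, the paper's conditions correspond to toggling vision_info and example: neither (minimal prompt, 0.59 agreement), one or the other (0.59), or both together (0.70).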
Similar Papers
Vision language models have difficulty recognizing virtual objects
CV and Pattern Recognition
AI struggles to imagine unseen objects in pictures.
Can Vision Language Models Infer Human Gaze Direction? A Controlled Study
CV and Pattern Recognition
Computers can't tell where people are looking.
Examining Vision Language Models through Multi-dimensional Experiments with Vision and Text Features
CV and Pattern Recognition
Fixes AI mistakes when looking at pictures.