ColorBlindnessEval: Can Vision-Language Models Pass Color Blindness Tests?

Published: September 23, 2025 | arXiv ID: 2509.19070v1

By: Zijian Ling, Han Zhang, Yazhuo Zhou, and more

Potential Business Impact:

Tests whether AI vision models can read the numbers hidden in color blindness test patterns.

Business Areas:
Visual Search, Internet Services

This paper presents ColorBlindnessEval, a novel benchmark designed to evaluate the robustness of Vision-Language Models (VLMs) in visually adversarial scenarios inspired by the Ishihara color blindness test. Our dataset comprises 500 Ishihara-like images featuring numbers from 0 to 99 with varying color combinations, challenging VLMs to accurately recognize numerical information embedded in complex visual patterns. We assess 9 VLMs using Yes/No and open-ended prompts and compare their performance with human participants. Our experiments reveal limitations in the models' ability to interpret numbers in adversarial contexts, highlighting prevalent hallucination issues. These findings underscore the need to improve the robustness of VLMs in complex visual environments. ColorBlindnessEval serves as a valuable tool for benchmarking and improving the reliability of VLMs in real-world applications where accuracy is critical.
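
The two prompting modes described in the abstract are simple enough to sketch in code. The following is a minimal illustration, not the authors' actual implementation: query_vlm() is a placeholder for a real model call, the plates/ file layout is hypothetical, and the use of a matched false probe in the Yes/No protocol is an assumption about how such a test might avoid rewarding a model that always answers "yes".

```python
import random

def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder for a real VLM call (API or local model).
    Returns a random answer so the sketch runs end to end."""
    return random.choice(["yes", "no"] + [str(n) for n in range(100)])

def yes_no_accuracy(samples):
    """Probe each image with the true number and a wrong number;
    the expected answers are 'yes' and 'no' respectively."""
    correct, total = 0, 0
    for image_path, true_number in samples:
        wrong = random.choice([n for n in range(100) if n != true_number])
        for probe, expected in ((true_number, "yes"), (wrong, "no")):
            prompt = f"Is the number {probe} visible in this image? Answer yes or no."
            answer = query_vlm(image_path, prompt).strip().lower()
            correct += answer.startswith(expected)
            total += 1
    return correct / total

def open_ended_accuracy(samples):
    """Ask for the number directly and compare to ground truth."""
    correct = 0
    for image_path, true_number in samples:
        prompt = "What number is shown in this image? Reply with digits only."
        if query_vlm(image_path, prompt).strip() == str(true_number):
            correct += 1
    return correct / len(samples)

if __name__ == "__main__":
    # Hypothetical layout: one plate per (number, color-combination) pair,
    # giving the 500 images (numbers 0-99) described in the abstract.
    samples = [(f"plates/plate_{n:02d}_combo{c}.png", n)
               for n in range(100) for c in range(5)]
    print(f"Yes/No accuracy:     {yes_no_accuracy(samples):.3f}")
    print(f"Open-ended accuracy: {open_ended_accuracy(samples):.3f}")
```

Comparing the two scores is informative: a gap between Yes/No and open-ended accuracy on the same images is one way hallucination shows up, since agreeing with a suggested number is easier than producing the correct one unprompted.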

Repos / Data Links

Page Count
12 pages

Category
Computer Science: Computer Vision and Pattern Recognition