Score: 1

FunBench: Benchmarking Fundus Reading Skills of MLLMs

Published: March 2, 2025 | arXiv ID: 2503.00901v1

By: Qijie Wei, Kaiheng Qian, Xirong Li

Potential Business Impact:

Helps AI systems read retinal (fundus) photographs to detect eye diseases.

Business Areas:
Image Recognition Data and Analytics, Software

Multimodal Large Language Models (MLLMs) have shown significant potential in medical image analysis. However, their capabilities in interpreting fundus images, a critical skill for ophthalmology, remain under-evaluated. Existing benchmarks lack fine-grained task divisions and fail to provide a modular analysis of an MLLM's two key modules, i.e., the large language model (LLM) and the vision encoder (VE). This paper introduces FunBench, a novel visual question answering (VQA) benchmark designed to comprehensively evaluate MLLMs' fundus reading skills. FunBench features a hierarchical task organization across four levels (modality perception, anatomy perception, lesion analysis, and disease diagnosis). It also offers three targeted evaluation modes: linear-probe based VE evaluation, knowledge-prompted LLM evaluation, and holistic evaluation. Experiments on nine open-source MLLMs plus GPT-4o reveal significant deficiencies in fundus reading skills, particularly in basic tasks such as laterality recognition. The results highlight the limitations of current MLLMs and emphasize the need for domain-specific training and improved LLMs and VEs.
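The linear-probe VE evaluation mentioned above follows a standard recipe: freeze the vision encoder, extract features, and train only a linear classifier on top, so that probe accuracy reflects what the encoder's features capture (e.g., laterality). The sketch below is illustrative, not the paper's code: it simulates frozen VE features with synthetic vectors (the `encode` function and all constants are assumptions) and trains a logistic-regression probe with plain gradient descent.

```python
# Hypothetical sketch of linear-probe evaluation of a frozen vision encoder (VE).
# FunBench probes real fundus-image features; here synthetic vectors stand in
# for VE(image) so the probing recipe itself is runnable end to end.
import math
import random

random.seed(0)
DIM, CLASSES, N_PER_CLASS = 16, 2, 100  # e.g. laterality: left vs right eye

# Simulated "frozen encoder": each class has a fixed feature direction.
centers = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(CLASSES)]

def encode(label):
    """Stand-in for VE(image): the class center plus Gaussian noise."""
    return [c + random.gauss(0, 0.8) for c in centers[label]]

data = [(encode(y), y) for y in range(CLASSES) for _ in range(N_PER_CLASS)]
random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Linear probe: logistic regression trained with gradient descent while the
# (simulated) encoder stays frozen -- only w and b are learned.
w, b, lr = [0.0] * DIM, 0.0, 0.05
for _ in range(50):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        g = sigmoid(z) - y  # gradient of the log-loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

acc = sum(predict(x) == y for x, y in test) / len(test)
print(f"linear-probe accuracy: {acc:.2f}")
```

Because the probe is linear and the encoder is frozen, accuracy well above chance indicates the features linearly encode the target attribute; near-chance accuracy would suggest the encoder itself, not the LLM, is the bottleneck for that skill.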

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
7 pages

Category
Computer Science:
CV and Pattern Recognition