Evaluating Cell Type Inference in Vision Language Models Under Varying Visual Context

Published: June 15, 2025 | arXiv ID: 2506.12683v1

By: Samarth Singhal, Sandeep Singhal

Potential Business Impact:

AI models could help pathologists analyze histopathology slides, though general-purpose models still fall short of purpose-trained ones.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) have rapidly advanced alongside Large Language Models (LLMs). This study evaluates the capabilities of prominent generative VLMs, such as GPT-4.1 and Gemini 2.5 Pro, accessed via APIs, for histopathology image classification tasks, including cell typing. Using diverse datasets from public and private sources, we apply zero-shot and one-shot prompting methods to assess VLM performance, comparing them against custom-trained Convolutional Neural Networks (CNNs). Our findings demonstrate that while one-shot prompting significantly improves VLM performance over zero-shot ($p \approx 1.005 \times 10^{-5}$ based on Kappa scores), these general-purpose VLMs currently underperform supervised CNNs on most tasks. This work underscores both the promise and limitations of applying current VLMs to specialized domains like pathology via in-context learning. All code and instructions for reproducing the study can be accessed from the repository https://www.github.com/a12dongithub/VLMCCE.
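
As a rough illustration of the zero-shot and one-shot prompting protocol the abstract describes, here is a minimal Python sketch against the OpenAI chat completions API. The label set, prompts, file paths, and helper names are illustrative assumptions, not the authors' code; see the linked repository for the actual study implementation.

```python
# Minimal sketch (not the authors' code) of zero-shot vs. one-shot
# cell-type classification via a VLM API. Model choice, prompts, and
# labels below are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["lymphocyte", "neutrophil", "epithelial"]  # hypothetical label set


def encode(path: str) -> str:
    """Base64-encode an image file for inline submission to the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def classify(query_path: str, example: tuple[str, str] | None = None) -> str:
    """Zero-shot if `example` is None; one-shot if a (path, label) pair is given."""
    content = [{
        "type": "text",
        "text": f"Classify the cell in the image as one of: {', '.join(LABELS)}. "
                "Answer with the label only.",
    }]
    if example is not None:
        ex_path, ex_label = example
        content += [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{encode(ex_path)}"}},
            {"type": "text",
             "text": f"The cell above is a {ex_label}. Now classify this cell:"},
        ]
    content.append(
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{encode(query_path)}"}})
    resp = client.chat.completions.create(
        model="gpt-4.1",  # one of the VLMs the paper evaluates
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content.strip()


# Usage (hypothetical files):
# zero_shot = classify("query_cell.png")
# one_shot  = classify("query_cell.png", example=("example_cell.png", "lymphocyte"))
```

The one-shot case simply prepends a single labeled example image to the same request, which is the in-context-learning setup the abstract credits with the significant Kappa-score improvement over zero-shot.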

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition