Can Argus Judge Them All? Comparing VLMs Across Domains

Published: June 23, 2025 | arXiv ID: 2507.01042v1

By: Harsh Joshi, Gautam Siddharth Kashyap, Rafiq Ali, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Helps teams choose the right vision-language model for a given task by showing which models generalize broadly and which excel at specialized jobs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Vision-Language Models (VLMs) are advancing multimodal AI, yet their performance consistency across tasks is underexamined. We benchmark CLIP, BLIP, and LXMERT across diverse datasets spanning retrieval, captioning, and reasoning. Our evaluation includes task accuracy, generation quality, efficiency, and a novel Cross-Dataset Consistency (CDC) metric. CLIP shows strongest generalization (CDC: 0.92), BLIP excels on curated data, and LXMERT leads in structured reasoning. These results expose trade-offs between generalization and specialization, informing industrial deployment of VLMs and guiding development toward robust, task-flexible architectures.
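
The summary does not define how the Cross-Dataset Consistency (CDC) metric is computed. The sketch below is a minimal, assumption-laden illustration of one plausible formulation: 1 minus the coefficient of variation of a model's per-dataset scores, so that identical scores across datasets yield 1.0 and highly variable scores trend toward 0. The function name `cross_dataset_consistency` and the example accuracies are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a cross-dataset consistency score.
# This is NOT the paper's definition of CDC, which is not given
# in this summary; it only illustrates the general idea of
# rewarding models whose scores vary little across datasets.

from statistics import mean, pstdev

def cross_dataset_consistency(scores: dict[str, float]) -> float:
    """Return a consistency score in [0, 1] from per-dataset accuracies.

    Defined here (as an assumption) as 1 minus the coefficient of
    variation, clamped at 0.
    """
    values = list(scores.values())
    mu = mean(values)
    if mu == 0:
        return 0.0
    cv = pstdev(values) / mu  # relative spread across datasets
    return max(0.0, 1.0 - cv)

# Illustrative (made-up) per-dataset accuracies for a CLIP-like model.
clip_scores = {"retrieval": 0.88, "captioning": 0.81, "reasoning": 0.78}
print(f"CDC: {cross_dataset_consistency(clip_scores):.2f}")
```

Under this formulation, a model that trades peak accuracy on one dataset for steadier performance everywhere scores higher, which matches the abstract's framing of generalization versus specialization.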

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Information Retrieval