Score: 1

Evaluating Robustness of Vision-Language Models Under Noisy Conditions

Published: September 15, 2025 | arXiv ID: 2509.12492v1

By: Purushoth, Alireza

Potential Business Impact:

Tests how reliably AI models describe and understand images when picture quality degrades.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) have achieved exceptional success across multimodal tasks such as image captioning and visual question answering. However, their robustness under noisy conditions remains underexplored. In this study, we present a comprehensive framework to evaluate the performance of several state-of-the-art VLMs under controlled perturbations, including lighting variation, motion blur, and compression artifacts. We use both lexical metrics (BLEU, METEOR, ROUGE, CIDEr) and neural similarity measures based on sentence embeddings to quantify semantic alignment. Our experiments span diverse datasets and reveal key insights: (1) the descriptiveness of ground-truth captions significantly influences model performance; (2) larger models such as LLaVA excel in semantic understanding but do not universally outperform smaller models; and (3) certain noise types, such as JPEG compression and motion blur, dramatically degrade performance across models. Our findings highlight the nuanced trade-offs among model size, dataset characteristics, and noise resilience, and offer a standardized benchmark for future robust multimodal learning.
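The paper does not ship code with this listing, but the evaluation loop the abstract describes is straightforward to sketch. The snippet below is a minimal illustration, assuming the Pillow and sentence-transformers packages; the `perturb` and `semantic_alignment` helpers, the perturbation parameters, the encoder choice, and the placeholder captions are all hypothetical, and Gaussian blur stands in for true motion blur, which Pillow does not provide.

```python
# Minimal sketch of the evaluation loop described in the abstract.
# Assumes the Pillow and sentence-transformers packages; helper names,
# parameter values, and the placeholder captions are illustrative only.
import io

from PIL import Image, ImageEnhance, ImageFilter
from sentence_transformers import SentenceTransformer, util


def perturb(image: Image.Image, kind: str) -> Image.Image:
    """Apply one controlled perturbation of the kinds the paper studies."""
    if kind == "lighting":
        # Darken the image to mimic a lighting variation.
        return ImageEnhance.Brightness(image).enhance(0.4)
    if kind == "blur":
        # Pillow has no true motion blur; Gaussian blur stands in here.
        return image.filter(ImageFilter.GaussianBlur(radius=3))
    if kind == "jpeg":
        # Round-trip through aggressive JPEG compression.
        buf = io.BytesIO()
        image.convert("RGB").save(buf, format="JPEG", quality=10)
        buf.seek(0)
        return Image.open(buf)
    raise ValueError(f"unknown perturbation: {kind}")


def semantic_alignment(candidate: str, reference: str,
                       encoder: SentenceTransformer) -> float:
    """Cosine similarity between sentence embeddings: the neural
    complement to lexical metrics such as BLEU or CIDEr."""
    emb = encoder.encode([candidate, reference], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))


if __name__ == "__main__":
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    image = Image.open("example.jpg")                 # any test image
    reference = "A dog runs across a grassy field."   # ground-truth caption
    for kind in ("lighting", "blur", "jpeg"):
        noisy = perturb(image, kind)
        # caption = vlm.caption(noisy)         # stand-in for the VLM under test
        caption = "A blurry animal on grass."  # illustrative model output
        score = semantic_alignment(caption, reference, encoder)
        print(f"{kind}: semantic alignment = {score:.3f}")
```

Comparing the alignment score on the clean image against the scores on the perturbed variants gives a per-noise-type robustness profile, which is the kind of comparison the paper reports across models and datasets.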

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science: Computer Vision and Pattern Recognition