Resampling Benchmark for Efficient Comprehensive Evaluation of Large Vision-Language Models

Published: April 14, 2025 | arXiv ID: 2504.09979v1

By: Teppei Suzuki, Keisuke Ozawa

Potential Business Impact:

Tests smart AI faster and more fairly.

Business Areas:
Image Recognition, Data and Analytics, Software

We propose an efficient evaluation protocol for large vision-language models (VLMs). Given their broad knowledge and reasoning capabilities, multiple benchmarks are needed for comprehensive assessment, making evaluation computationally expensive. To improve efficiency, we construct a subset that yields results comparable to full benchmark evaluations. Our benchmark classification experiments reveal that no single benchmark fully covers all challenges. We then introduce a subset construction method using farthest point sampling (FPS). Our experiments show that FPS-based benchmarks maintain a strong correlation (> 0.96) with full evaluations while using only about 1% of the data. Additionally, applying FPS to an existing benchmark improves correlation with overall evaluation results, suggesting its potential to reduce unintended dataset biases.
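For intuition, here is a minimal sketch of farthest point sampling over benchmark-sample embeddings: starting from one item, it repeatedly adds the item farthest from everything already selected. This is a hypothetical illustration, not the authors' implementation; the embedding source, Euclidean distance, and subset size are assumptions.

```python
import numpy as np

def farthest_point_sampling(embeddings: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Select k indices whose embeddings are mutually far apart (greedy FPS)."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    selected = [int(rng.integers(n))]  # start from a random sample
    # distance of every point to its nearest already-selected point
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(k - 1):
        next_idx = int(np.argmax(dists))  # farthest from the current subset
        selected.append(next_idx)
        new_d = np.linalg.norm(embeddings - embeddings[next_idx], axis=1)
        dists = np.minimum(dists, new_d)  # update nearest-selected distances
    return np.array(selected)

# Usage: pick ~1% of 10,000 pooled benchmark items from (hypothetical) 512-d embeddings.
emb = np.random.rand(10_000, 512)
subset_idx = farthest_point_sampling(emb, k=100)
```

The greedy update keeps, for each item, only its distance to the nearest selected point, so each iteration is linear in the number of items rather than quadratic.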

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition