Score: 1

Judge Model for Large-scale Multimodality Benchmarks

Published: January 3, 2026 | arXiv ID: 2601.06106v1

By: Min-Han Shih, Yu-Hsin Wu, Yu-Wei Chen

Potential Business Impact:

Tests AI's understanding of pictures, sound, and words.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We propose a dedicated multimodal Judge Model designed to provide reliable, explainable evaluation across a diverse suite of tasks. Our benchmark spans text, audio, image, and video modalities, drawing from carefully sampled public datasets with fixed seeds to ensure reproducibility and minimize train-test leakage. Instead of simple scoring, our framework aggregates multimodal judgments, analyzes the quality and reasoning consistency of model outputs, and generates diagnostic feedback. We evaluate several MLLMs, including Gemini 2.5, Phi 4, and Qwen 2.5, across 280 multimodal samples and compare Judge Model assessments with human annotators. Results show strong alignment between the Judge Model and human scores, demonstrating its potential as a scalable, interpretable evaluation pipeline for future multimodal AI research.
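The abstract highlights two mechanics that are easy to make concrete: fixed-seed sampling for a reproducible benchmark split, and measuring agreement between Judge Model scores and human annotations. The sketch below is a minimal illustration of both ideas; the function names, the score scale, and the toy data are assumptions for demonstration and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): fixed-seed sampling of benchmark
# items and a simple judge-vs-human agreement check via Pearson correlation.
import random
from statistics import mean


def sample_benchmark(items, n, seed=42):
    """Deterministically sample n items so the benchmark split is reproducible."""
    rng = random.Random(seed)  # fixed seed -> identical sample on every run
    return rng.sample(items, n)


def pearson(xs, ys):
    """Pearson correlation between judge scores and human scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)


if __name__ == "__main__":
    # Toy pool of 1,000 candidate items; the paper reports 280 sampled multimodal items.
    pool = list(range(1000))
    benchmark = sample_benchmark(pool, 280, seed=42)

    # Fabricated scores on a 1-5 scale, purely to exercise the agreement computation.
    judge_scores = [random.Random(i).uniform(1, 5) for i in benchmark]
    human_scores = [s + random.Random(i + 1).uniform(-0.5, 0.5)
                    for i, s in zip(benchmark, judge_scores)]

    print(f"sampled {len(benchmark)} items; "
          f"judge-human r = {pearson(judge_scores, human_scores):.3f}")
```

In practice, agreement between an automatic judge and human raters is often reported with correlation or chance-corrected statistics (e.g., Cohen's kappa); the correlation shown here is only one such choice and is not specified by the abstract.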

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)