VQualA 2025 Challenge on Visual Quality Comparison for Large Multimodal Models: Methods and Results
By: Hanwei Zhu, Haoning Wu, Zicheng Zhang, and more
Potential Business Impact:
Helps computers judge image quality more accurately.
This paper summarizes the VQualA 2025 Challenge on Visual Quality Comparison for Large Multimodal Models (LMMs), hosted as part of the ICCV 2025 Workshop on Visual Quality Assessment. The challenge aims to evaluate and enhance the ability of state-of-the-art LMMs to perform open-ended, detailed reasoning about visual quality differences across multiple images. To this end, the competition introduces a novel benchmark comprising thousands of coarse-to-fine-grained visual quality comparison tasks, spanning single images, pairs, and multi-image groups, with each task requiring models to provide accurate quality judgments. The competition emphasizes holistic evaluation protocols, including two-alternative forced choice (2AFC) binary preference tasks and multiple-choice questions (MCQs). Around 100 participants submitted entries, with five models demonstrating the emerging capabilities of instruction-tuned LMMs for quality assessment. This challenge marks a significant step toward open-domain visual quality reasoning and comparison, and serves as a catalyst for future research on interpretable, human-aligned quality evaluation systems.
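The two protocols named in the abstract both reduce to answer-matching accuracy. A minimal sketch of how such scoring might work, assuming simple letter-coded answers (this is an illustration, not the challenge's official evaluation code):

```python
# Hypothetical scoring sketch for the two protocols mentioned above:
# 2AFC binary preference (pick the higher-quality image in a pair)
# and multiple-choice questions (pick one option letter).

def accuracy(predictions, ground_truth):
    """Fraction of items where the model's answer matches the reference.

    For 2AFC, each entry is 'A' or 'B' (which image in the pair the
    model judges higher quality); for MCQs, each entry is an option letter.
    """
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction/reference length mismatch")
    hits = sum(p == g for p, g in zip(predictions, ground_truth))
    return hits / len(predictions)

# 2AFC example: model prefers image A or B for each of four pairs.
pair_preds = ["A", "B", "A", "A"]
pair_refs  = ["A", "B", "B", "A"]
print(accuracy(pair_preds, pair_refs))  # 0.75

# MCQ example: one option letter per question.
mcq_preds = ["C", "A", "D"]
mcq_refs  = ["C", "B", "D"]
print(accuracy(mcq_preds, mcq_refs))
```

In practice, a challenge of this kind would likely also report per-distortion or per-task breakdowns, but the core metric is this match rate.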
Similar Papers
VQualA 2025 Challenge on Face Image Quality Assessment: Methods and Results
CV and Pattern Recognition
Improves how computers judge photo quality.
Q-CLIP: Unleashing the Power of Vision-Language Models for Video Quality Assessment through Unified Cross-Modal Adaptation
CV and Pattern Recognition
Makes computers judge video quality better and faster.
FVQ: A Large-Scale Dataset and A LMM-based Method for Face Video Quality Assessment
CV and Pattern Recognition
Rates face video quality like a human.