MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models
By: Wulin Xie, Yi-Fan Zhang, Chaoyou Fu, and more
Potential Business Impact:
Tests AI that understands and creates with images and words.
Existing MLLM benchmarks face significant challenges in evaluating Unified MLLMs (U-MLLMs) due to: 1) the lack of standardized benchmarks for traditional tasks, leading to inconsistent comparisons; 2) the absence of benchmarks for mixed-modality generation, which prevents assessment of multimodal reasoning capabilities. We present a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark includes: 1. Standardized Traditional Task Evaluation. We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies. 2. Unified Task Assessment. We introduce five novel tasks testing multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning. 3. Comprehensive Model Benchmarking. We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, VILA-U, and Gemini2-flash, alongside specialized understanding models (e.g., Claude-3.5-Sonnet) and generation models (e.g., DALL-E-3). Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively. The code and evaluation data can be found at https://mme-unify.github.io/.
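To make the benchmark structure concrete, below is a minimal sketch of how a model could be scored across the two task categories the abstract describes (traditional tasks sampled from 12 datasets and the five novel unified tasks). The task names, the `evaluate` helper, and the model/scorer interfaces are illustrative assumptions, not the released MME-Unify toolkit; consult https://mme-unify.github.io/ for the actual evaluation code.

```python
# Hypothetical evaluation loop over MME-Unify-style task categories.
# All identifiers here are illustrative placeholders, not the official API.
from collections import defaultdict

# Illustrative subsets of the two benchmark categories described in the abstract.
TRADITIONAL_TASKS = ["vqa", "image_captioning", "text_to_image"]          # traditional (10 tasks / 30 subtasks)
UNIFIED_TASKS = ["image_editing", "commonsense_qa_gen", "geometric_reasoning"]  # five novel unified tasks

def evaluate(model, samples_by_task, scorer):
    """Return mean per-task score; `scorer` compares a model output to the reference."""
    scores = defaultdict(list)
    for task, samples in samples_by_task.items():
        for sample in samples:
            output = model(sample["prompt"], sample.get("image"))
            scores[task].append(scorer(task, output, sample["reference"]))
    return {task: sum(vals) / len(vals) for task, vals in scores.items()}

if __name__ == "__main__":
    # Dummy model and exact-match scorer, only to make the sketch runnable end to end.
    dummy_model = lambda prompt, image=None: "a red cube"
    exact_match = lambda task, out, ref: float(out.strip().lower() == ref.strip().lower())
    toy_data = {"vqa": [{"prompt": "What object is shown?", "reference": "a red cube"}]}
    print(evaluate(dummy_model, toy_data, exact_match))  # {'vqa': 1.0}
```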
Similar Papers
Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark
CV and Pattern Recognition
Tests how well AI can see and create.
UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation
CV and Pattern Recognition
Tests AI that sees and answers questions.
UniEval: Unified Holistic Evaluation for Unified Multimodal Understanding and Generation
CV and Pattern Recognition
Tests AI that understands and makes pictures and words.