UmniBench: A Unified Understanding and Generation Model-Oriented Omni-dimensional Benchmark
By: Kai Liu, Leyang Chen, Wenbo Li, and more
Unifying multimodal understanding and generation has shown impressive capabilities in cutting-edge proprietary systems. However, evaluations of unified multimodal models (UMMs) remain decoupled, assessing understanding and generation separately on dedicated datasets. To address this, we propose UmniBench, a benchmark tailored for UMMs with omni-dimensional evaluation. First, UmniBench assesses understanding, generation, and editing abilities within a single evaluation process: based on human-examined prompts and QA pairs, it leverages the UMM itself, using the model's understanding ability to evaluate its own generation and editing outputs. This simple but effective paradigm enables comprehensive evaluation of UMMs. Second, UmniBench covers 13 major domains and more than 200 concepts, ensuring a thorough inspection of UMMs. Moreover, UmniBench can also decouple the three abilities and evaluate understanding, generation, and editing separately, providing fine-grained assessment. Using UmniBench, we benchmark 24 popular models, including both UMMs and single-ability large models. We hope this benchmark provides a more comprehensive and objective view of unified models and practical guidance for improving community models.
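The self-evaluation paradigm described in the abstract can be pictured as a short loop: the model generates an image from a prompt, then its own understanding branch answers the human-examined QA pairs about that image. Below is a minimal Python sketch of that loop. The `UnifiedModel` interface, the method names `generate` and `answer`, and the exact-match scoring are illustrative assumptions, not the paper's actual API or metric.

```python
from typing import List, Protocol, Tuple


class UnifiedModel(Protocol):
    """Hypothetical interface for a unified multimodal model (UMM).

    Assumed for illustration: `generate` produces an image from a text
    prompt, and `answer` performs visual question answering on an image.
    """

    def generate(self, prompt: str) -> object: ...

    def answer(self, image: object, question: str) -> str: ...


def score_generation(model: UnifiedModel,
                     prompt: str,
                     qa_pairs: List[Tuple[str, str]]) -> float:
    """Score one generation prompt via the model's own understanding.

    The model generates an image from the prompt; its understanding
    branch then answers the human-examined QA pairs about that image.
    The fraction of correct answers is the score. Exact string matching
    is a simplifying assumption here.
    """
    image = model.generate(prompt)
    if not qa_pairs:
        return 0.0
    correct = sum(
        model.answer(image, question).strip().lower() == expected.strip().lower()
        for question, expected in qa_pairs
    )
    return correct / len(qa_pairs)
```

An editing item would presumably follow the same loop, with a source image and an edit instruction replacing the generation prompt before the QA step.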
Similar Papers
Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark
CV and Pattern Recognition
Tests how well AI can see and create.
MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models
CV and Pattern Recognition
Tests AI that understands and creates with images and words.
MMAO-Bench: MultiModal All in One Benchmark Reveals Compositional Law between Uni-modal and Omni-modal in OmniModels
Computation and Language
Tests computers on seeing, hearing, and reading.