MVU-Eval: Towards Multi-Video Understanding Evaluation for Multimodal LLMs
By: Tianhao Peng, Haochen Wang, Yuanxing Zhang, and more
Potential Business Impact:
Tests how well AI systems can understand many videos at once.
The advent of Multimodal Large Language Models (MLLMs) has extended AI capabilities to visual modalities, yet existing evaluation benchmarks remain limited to single-video understanding, overlooking the critical need for multi-video understanding in real-world scenarios (e.g., sports analytics and autonomous driving). To address this gap, we introduce MVU-Eval, the first comprehensive benchmark for evaluating Multi-Video Understanding in MLLMs. MVU-Eval assesses eight core competencies through 1,824 meticulously curated question-answer pairs spanning 4,959 videos from diverse domains, covering both fundamental perception tasks and higher-order reasoning tasks. These competencies are rigorously aligned with real-world applications such as multi-sensor fusion in autonomous systems and cross-angle sports analytics. Through extensive evaluation of state-of-the-art open-source and closed-source models, we reveal significant performance discrepancies and limitations in current MLLMs' ability to understand content across multiple videos. The benchmark will be made publicly available to foster future research.
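The abstract describes question-answer pairs that each span several videos. As a rough illustration only, the sketch below shows what scoring a model on such multi-video, multiple-choice QA might look like; the JSONL record fields (videos, question, options, answer) and the ask_model function are hypothetical placeholders, not MVU-Eval's actual data format or API.

```python
# Minimal sketch of multi-video multiple-choice QA evaluation.
# Schema and ask_model() are assumptions for illustration, NOT the
# released MVU-Eval format.
import json

def ask_model(video_paths: list[str], prompt: str) -> str:
    """Placeholder: send all videos plus the prompt to an MLLM and
    return its raw text reply. A real harness would call a specific
    model API here."""
    raise NotImplementedError

def evaluate(jsonl_path: str) -> float:
    correct = total = 0
    with open(jsonl_path) as f:
        for line in f:
            item = json.loads(line)
            # One question can reference several videos, e.g. two
            # camera angles of the same play in a sports clip.
            prompt = (
                item["question"]
                + "\nOptions: "
                + " ".join(f"({k}) {v}" for k, v in item["options"].items())
                + "\nAnswer with the option letter only."
            )
            reply = ask_model(item["videos"], prompt)
            # Naive exact-match on the option letter; real benchmarks
            # typically use more robust answer extraction.
            correct += reply.strip().upper().startswith(item["answer"].upper())
            total += 1
    return correct / total if total else 0.0
```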
Similar Papers
MT-Video-Bench: A Holistic Video Understanding Benchmark for Evaluating Multimodal LLMs in Multi-Turn Dialogues
CV and Pattern Recognition
Tests AI's ability to discuss videos across multiple dialogue turns.
VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs
CV and Pattern Recognition
Tests how well computers understand visual knowledge about how the world works.
Video-MMLU: A Massive Multi-Discipline Lecture Understanding Benchmark
CV and Pattern Recognition
Tests how well computers understand complex lecture videos.