V2P-Bench: Evaluating Video-Language Understanding with Visual Prompts for Better Human-Model Interaction
By: Yiming Zhao, Yu Zeng, Yukun Qi, and more
Potential Business Impact:
Tests how well computers understand videos.
Large Vision-Language Models (LVLMs) have made significant progress in video understanding in recent years. However, current benchmarks rely uniformly on text prompts for evaluation, which often necessitate complex referential language and fail to provide precise spatial and temporal references. This limitation diminishes the experience and efficiency of human-model interaction. To address it, we propose the Video Visual Prompt Benchmark (V2P-Bench), a comprehensive benchmark specifically designed to evaluate LVLMs' video understanding capabilities in multimodal human-model interaction scenarios. V2P-Bench includes 980 unique videos and 1,172 QA pairs, covering 5 main tasks and 12 dimensions, facilitating instance-level fine-grained understanding aligned with human cognition. Benchmarking results reveal that even the most powerful models perform poorly on V2P-Bench (65.4% for GPT-4o and 67.9% for Gemini-1.5-Pro), far below the 88.3% achieved by human experts, highlighting the current shortcomings of LVLMs in understanding video visual prompts. We hope V2P-Bench will serve as a foundation for advancing multimodal human-model interaction and video understanding evaluation. Project page: https://github.com/gaotiexinqu/V2P-Bench.
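To make the reported accuracy numbers concrete, below is a minimal sketch of how a V2P-Bench-style evaluation loop might score a model on multiple-choice QA pairs. The file name (v2p_bench_qa.json), the field names (video_path, question, options, answer), and the predictor interface are assumptions for illustration, not the benchmark's actual schema; see the project page for the released format.

```python
import json


def evaluate(pred_fn, qa_path="v2p_bench_qa.json"):
    """Score a predictor on multiple-choice QA pairs and return accuracy.

    pred_fn takes (video_path, question, options) and returns an option
    letter such as "A". Field names here are hypothetical.
    """
    with open(qa_path) as f:
        qa_pairs = json.load(f)

    correct = 0
    for item in qa_pairs:
        # Each item is assumed to bundle a video (with visual-prompt overlays),
        # a question, candidate options, and the ground-truth option letter.
        prediction = pred_fn(item["video_path"], item["question"], item["options"])
        if prediction.strip().upper() == item["answer"].strip().upper():
            correct += 1

    return correct / len(qa_pairs)


if __name__ == "__main__":
    # Toy predictor that always answers "A", just to exercise the loop.
    accuracy = evaluate(lambda video, question, options: "A")
    print(f"Accuracy: {accuracy:.1%}")  # the paper reports e.g. 65.4% for GPT-4o
```

In practice the predictor would wrap an LVLM call that ingests the video frames carrying the visual prompts (boxes, arrows, or masks) along with the question text, which is exactly the interaction mode the benchmark is designed to probe.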
Similar Papers
Modeling Variants of Prompts for Vision-Language Models
CV and Pattern Recognition
Makes AI understand pictures better with any words.
IV-Bench: A Benchmark for Image-Grounded Video Perception and Reasoning in Multimodal LLMs
CV and Pattern Recognition
Tests how well AI understands videos with pictures.
Video-Bench: Human-Aligned Video Generation Benchmark
CV and Pattern Recognition
Tests AI videos to match what people like.