T2VWorldBench: A Benchmark for Evaluating World Knowledge in Text-to-Video Generation
By: Yubin Chen, Xuyang Guo, Zhenmei Shi, and more
Potential Business Impact:
Tests if AI videos understand how the world works.
Text-to-video (T2V) models have shown remarkable performance in generating visually plausible scenes, but their ability to leverage world knowledge to ensure semantic consistency and factual accuracy remains largely understudied. In response, we propose T2VWorldBench, the first systematic framework for evaluating the world-knowledge generation abilities of text-to-video models, covering 6 major categories, 60 subcategories, and 1,200 prompts across a wide range of domains, including physics, nature, activity, culture, causality, and objects. To capture both human preference and scalable evaluation, the benchmark combines human evaluation with automated evaluation using vision-language models (VLMs). We evaluated 10 of the most advanced text-to-video models currently available, ranging from open-source to commercial, and found that most fail to apply world knowledge and generate factually correct videos. These findings reveal a critical gap in current text-to-video models' ability to leverage world knowledge, and they offer valuable research opportunities and entry points for building models with robust capabilities for commonsense reasoning and factual generation.
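For intuition, a VLM-as-judge evaluation loop over category-organized prompts might look roughly like the sketch below. This is an illustrative assumption, not the paper's actual pipeline: the `generate_video` and `vlm_judge` callables, the 0-to-1 scoring rubric, and the per-category aggregation are all hypothetical stand-ins, since the abstract does not specify implementation details.

```python
# Minimal sketch of VLM-based automated evaluation for a T2V benchmark.
# Assumptions (not from the paper): `generate_video` is the T2V model under
# test, `vlm_judge` is any vision-language model client that answers a
# question about a sequence of frames with a numeric string.
from dataclasses import dataclass


@dataclass
class Prompt:
    category: str     # one of the 6 major categories, e.g. "physics"
    subcategory: str  # one of the 60 subcategories
    text: str         # the generation prompt


def score_video(vlm_judge, prompt: Prompt, frames: list) -> float:
    """Ask the VLM whether the video is factually consistent with the
    world knowledge the prompt invokes. Returns a score in [0, 1]."""
    question = (
        f"This video was generated from the prompt: '{prompt.text}'. "
        f"Does it correctly reflect real-world {prompt.category} knowledge? "
        "Answer with a single number from 0 (wrong) to 1 (fully correct)."
    )
    answer = vlm_judge(frames=frames, question=question)  # assumed API
    try:
        return min(max(float(answer.strip()), 0.0), 1.0)
    except ValueError:
        return 0.0  # unparsable judgments count as failures


def evaluate_model(generate_video, vlm_judge, prompts: list[Prompt]) -> dict:
    """Report the mean judge score per category across all prompts."""
    per_category: dict[str, list[float]] = {}
    for p in prompts:
        frames = generate_video(p.text)  # run the T2V model under test
        score = score_video(vlm_judge, p, frames)
        per_category.setdefault(p.category, []).append(score)
    return {cat: sum(s) / len(s) for cat, s in per_category.items()}
```

In a setup like this, the human evaluation track would score the same generated videos independently, letting the benchmark report agreement between human preference and the scalable VLM judge.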
Similar Papers
VideoVerse: How Far is Your T2V Generator from a World Model?
CV and Pattern Recognition
Tests if AI can make videos that make sense.
T2VTextBench: A Human Evaluation Benchmark for Textual Control in Video Generation Models
CV and Pattern Recognition
Tests if videos show words correctly.
PhyEduVideo: A Benchmark for Evaluating Text-to-Video Models for Physics Education
CV and Pattern Recognition
Tests AI-made physics videos for learning.