T2VWorldBench: A Benchmark for Evaluating World Knowledge in Text-to-Video Generation

Published: July 24, 2025 | arXiv ID: 2507.18107v1

By: Yubin Chen, Xuyang Guo, Zhenmei Shi, and more

Potential Business Impact:

Tests whether AI video generators understand how the world works.

Business Areas:
Virtual World Community and Lifestyle, Media and Entertainment, Software

Text-to-video (T2V) models have shown remarkable performance in generating visually plausible scenes, yet their ability to leverage world knowledge for semantic consistency and factual accuracy remains largely understudied. In response, we propose T2VWorldBench, the first systematic framework for evaluating the world-knowledge generation abilities of text-to-video models, covering 6 major categories, 60 subcategories, and 1,200 prompts across a wide range of domains, including physics, nature, activity, culture, causality, and objects. To capture both human preference and scalable evaluation, the benchmark incorporates human evaluation as well as automated evaluation using vision-language models (VLMs). We evaluated the 10 most advanced text-to-video models currently available, ranging from open-source to commercial systems, and found that most are unable to apply world knowledge and generate factually correct videos. These findings reveal a critical gap in the ability of current text-to-video models to leverage world knowledge, offering valuable research opportunities and entry points for building models with robust commonsense reasoning and factual generation capabilities.
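The abstract mentions automated evaluation with vision-language models over 1,200 categorized prompts. As a rough illustration only, and not the authors' actual pipeline, here is a minimal sketch of how a VLM-as-judge scoring loop over such prompts might be structured; the helpers `generate_video`, `sample_frames`, and `query_vlm`, the data fields, and the yes/no scoring rubric are all hypothetical placeholders.

```python
# Minimal sketch of a VLM-as-judge loop for world-knowledge evaluation.
# All helpers (generate_video, sample_frames, query_vlm) are hypothetical
# placeholders, not the T2VWorldBench implementation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PromptCase:
    category: str      # e.g. "physics", "culture" (major categories from the paper)
    subcategory: str   # one of the finer-grained subcategories
    text: str          # the generation prompt given to the T2V model


def evaluate_model(
    cases: List[PromptCase],
    generate_video: Callable[[str], str],           # prompt -> video path (hypothetical)
    sample_frames: Callable[[str], List[bytes]],    # video path -> key frames (hypothetical)
    query_vlm: Callable[[str, List[bytes]], bool],  # question + frames -> yes/no (hypothetical)
) -> dict:
    """Return per-category accuracy: fraction of prompts judged factually correct."""
    correct, total = {}, {}
    for case in cases:
        video = generate_video(case.text)
        frames = sample_frames(video)
        question = (
            "Does this video correctly depict the following world knowledge? "
            f"'{case.text}' Answer yes or no."
        )
        ok = query_vlm(question, frames)
        total[case.category] = total.get(case.category, 0) + 1
        correct[case.category] = correct.get(case.category, 0) + int(ok)
    return {cat: correct[cat] / total[cat] for cat in total}
```

In practice, such automated judgments would be cross-checked against human evaluation, as the benchmark combines both.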

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
32 pages

Category
Computer Science:
Computer Vision and Pattern Recognition