Are Synthetic Videos Useful? A Benchmark for Retrieval-Centric Evaluation of Synthetic Videos
By: Zecheng Zhao, Selena Song, Tong Chen, et al.
Potential Business Impact:
Improves text-based video search by identifying which computer-generated videos are actually useful for training retrieval models.
Text-to-video (T2V) synthesis has advanced rapidly, yet current evaluation metrics primarily capture visual quality and temporal consistency, offering limited insight into how synthetic videos perform in downstream tasks such as text-to-video retrieval (TVR). In this work, we introduce SynTVA, a new dataset and benchmark designed to evaluate the utility of synthetic videos for building retrieval models. Based on 800 diverse user queries derived from the MSRVTT training split, we generate synthetic videos using state-of-the-art T2V models and annotate each video-text pair along four key semantic alignment dimensions: Object & Scene, Action, Attribute, and Prompt Fidelity. Our evaluation framework correlates general video quality assessment (VQA) metrics with these alignment scores and examines their predictive power for downstream TVR performance. To explore pathways for scaling up, we further develop an Auto-Evaluator that estimates alignment quality from existing metrics. Beyond benchmarking, our results show that SynTVA is a valuable asset for dataset augmentation, enabling the selection of high-utility synthetic samples that measurably improve TVR outcomes. The project page and dataset are available at https://jasoncodemaker.github.io/SynTVA/.
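To make the Auto-Evaluator and sample-selection ideas from the abstract concrete, below is a minimal Python sketch of one plausible realization: regress the four SynTVA alignment dimensions from off-the-shelf VQA metric scores, then keep only synthetic videos whose predicted alignment is high enough for retrieval-training augmentation. The regressor choice (`Ridge`), the dimension keys, the averaging rule, and the 0.8 threshold are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an Auto-Evaluator: predict SynTVA's four semantic
# alignment dimensions from general VQA metric scores, then select
# high-utility synthetic videos for TVR dataset augmentation.
# All names, model choices, and thresholds are assumptions for illustration.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# The four alignment dimensions annotated in SynTVA (keys are illustrative).
ALIGNMENT_DIMS = ["object_scene", "action", "attribute", "prompt_fidelity"]

def fit_auto_evaluator(vqa_features: np.ndarray, alignment_scores: np.ndarray):
    """Fit one ridge regressor per alignment dimension.

    vqa_features:     (n_videos, n_metrics) scores from general VQA metrics.
    alignment_scores: (n_videos, 4) human annotations on the four dimensions.
    """
    X_train, X_val, y_train, y_val = train_test_split(
        vqa_features, alignment_scores, test_size=0.2, random_state=0
    )
    models = {}
    for i, dim in enumerate(ALIGNMENT_DIMS):
        model = Ridge(alpha=1.0).fit(X_train, y_train[:, i])
        # Held-out R^2 indicates how predictive the VQA metrics are
        # of this alignment dimension.
        print(f"{dim}: held-out R^2 = {model.score(X_val, y_val[:, i]):.3f}")
        models[dim] = model
    return models

def select_high_utility(models, vqa_features: np.ndarray, threshold: float = 0.8):
    """Return indices of synthetic videos whose mean predicted alignment
    across the four dimensions exceeds an illustrative threshold."""
    preds = np.stack(
        [models[d].predict(vqa_features) for d in ALIGNMENT_DIMS], axis=1
    )
    return np.where(preds.mean(axis=1) >= threshold)[0]
```

Under these assumptions, the selected indices would point to synthetic video-text pairs worth adding to a TVR training set; the abstract's claim is that filtering on alignment quality in this spirit measurably improves retrieval outcomes.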
Similar Papers
Towards Scalable Video Anomaly Retrieval: A Synthetic Video-Text Benchmark
CV and Pattern Recognition
Finds unusual events in videos using text descriptions.
Can Text-to-Video Generation help Video-Language Alignment?
CV and Pattern Recognition
Helps computers match videos to language using generated examples.
T2VEval: Benchmark Dataset and Objective Evaluation Method for T2V-generated Videos
CV and Pattern Recognition
Checks whether computer-generated videos match their text prompts.