VideoWeave: A Data-Centric Approach for Efficient Video Understanding

Published: January 9, 2026 | arXiv ID: 2601.06309v1

By: Zane Durante, Silky Singh, Arpandeep Khatua, and more

Potential Business Impact:

Enables AI models to understand long videos while using less training data.

Business Areas:
Image Recognition Data and Analytics, Software

Training video-language models is often prohibitively expensive due to the high cost of processing long frame sequences and the limited availability of annotated long videos. We present VideoWeave, a simple yet effective approach to improve data efficiency by constructing synthetic long-context training samples that splice together short, captioned videos from existing datasets. Rather than modifying model architectures or optimization objectives, VideoWeave reorganizes available video-text pairs to expand temporal diversity within a fixed compute budget. We systematically study how different data composition strategies, such as random versus visually clustered splicing and caption enrichment, affect performance on downstream video question answering. Under identical compute constraints, models trained with VideoWeave achieve higher accuracy than conventional video finetuning. Our results highlight that reorganizing training data, rather than altering architectures, may offer a simple and scalable path for training video-language models. We link our code for all experiments here.
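The splicing idea the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it weaves several short captioned clips into one long-context sample, choosing clips either uniformly at random or from a single visual cluster, and joins their captions with temporal connectives as a simple form of caption enrichment. All names (`weave_sample`, `clusters`, the connective string) are assumptions for illustration.

```python
import random

def weave_sample(clips, k=4, strategy="random", clusters=None, seed=0):
    """Splice k short captioned clips into one synthetic long sample.

    clips    : list of (frames, caption) pairs; `frames` is any sequence
               of frame identifiers standing in for decoded frames.
    strategy : "random" samples clips uniformly; "clustered" restricts
               sampling to one visual cluster (clusters maps index -> id).
    Returns (frames, caption) for the woven long-context sample.
    """
    rng = random.Random(seed)
    indices = list(range(len(clips)))
    if strategy == "clustered" and clusters is not None:
        # keep only clips from one randomly chosen visual cluster
        target = clusters[rng.choice(indices)]
        indices = [i for i in indices if clusters[i] == target]
    chosen = rng.sample(indices, min(k, len(indices)))
    # concatenate frame sequences in the sampled order
    frames = [f for i in chosen for f in clips[i][0]]
    # caption enrichment: join captions with a temporal connective
    caption = " Then, ".join(clips[i][1] for i in chosen)
    return frames, caption

# toy example: four 2-frame clips with captions
clips = [
    (["a1", "a2"], "a dog runs"),
    (["b1", "b2"], "a cat sleeps"),
    (["c1", "c2"], "a bird flies"),
    (["d1", "d2"], "a fish swims"),
]
frames, caption = weave_sample(clips, k=2, strategy="random", seed=0)
```

Because the splicing happens entirely at the data level, such woven samples can be fed to an unmodified video-language model, which matches the paper's claim that no architectural or objective changes are needed.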

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition