VideoWeave: A Data-Centric Approach for Efficient Video Understanding
By: Zane Durante, Silky Singh, Arpandeep Khatua, and more
Potential Business Impact:
Makes AI understand long videos with less data.
Training video-language models is often prohibitively expensive due to the high cost of processing long frame sequences and the limited availability of annotated long videos. We present VideoWeave, a simple yet effective approach to improve data efficiency by constructing synthetic long-context training samples that splice together short, captioned videos from existing datasets. Rather than modifying model architectures or optimization objectives, VideoWeave reorganizes available video-text pairs to expand temporal diversity within a fixed compute budget. We systematically study how different data composition strategies, such as random versus visually clustered splicing and caption enrichment, affect performance on downstream video question answering. Under identical compute constraints, models trained with VideoWeave achieve higher accuracy than conventional video finetuning. Our results highlight that reorganizing training data, rather than altering architectures, may offer a simple and scalable path for training video-language models. We link our code for all experiments here.
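The splicing idea can be illustrated with a short sketch. The helper below, weave_samples, and its field names (frames, caption) are illustrative assumptions rather than the authors' released code: it concatenates a few short captioned clips into one long-context training sample, optionally restricting the draw to a single visual cluster and joining the captions in order, loosely mirroring the random versus clustered splicing and caption enrichment strategies described in the abstract.

```python
import random

def weave_samples(short_clips, num_clips=4, clustered=False, cluster_key=None):
    """Splice short captioned clips into one synthetic long-context sample.

    Illustrative sketch only. `short_clips` is assumed to be a list of dicts
    with 'frames' (a list of frames) and 'caption' (a string); `cluster_key`
    is an assumed callable mapping a clip to a visual-cluster id.
    """
    if clustered and cluster_key is not None:
        # Visually clustered splicing: draw all clips from the same cluster
        # as a randomly chosen anchor clip.
        anchor = random.choice(short_clips)
        pool = [c for c in short_clips if cluster_key(c) == cluster_key(anchor)]
    else:
        # Random splicing: draw clips uniformly from the whole dataset.
        pool = short_clips

    chosen = random.sample(pool, k=min(num_clips, len(pool)))

    # Concatenate frames to form the long-context visual input.
    frames = [f for clip in chosen for f in clip["frames"]]
    # Simplified caption enrichment: order-aware concatenation of clip captions.
    caption = " Then, ".join(clip["caption"] for clip in chosen)
    return {"frames": frames, "caption": caption}
```

A usage example might draw one such sample per training step, so the model sees long, temporally diverse sequences without any additional annotated long videos or extra compute per token.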
Similar Papers
VideoCompressa: Data-Efficient Video Understanding via Joint Temporal Compression and Spatial Reconstruction
CV and Pattern Recognition
Makes AI learn from videos using way less data.
WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception
CV and Pattern Recognition
Makes videos look real for longer without errors.
FilmWeaver: Weaving Consistent Multi-Shot Videos with Cache-Guided Autoregressive Diffusion
CV and Pattern Recognition
Keeps the same people and places consistent across video shots.