Rethinking Visual Intelligence: Insights from Video Pretraining
By: Pablo Acuaviva, Aram Davtyan, Mariam Hassan, and more
Potential Business Impact:
Video models adapt to new visual tasks with less data than text models.
Large language models (LLMs) have demonstrated that large-scale pretraining enables systems to adapt rapidly to new problems with little supervision in the language domain. This success, however, has not translated as effectively to the visual domain, where models, including LLMs, continue to struggle with compositional understanding, sample efficiency, and general-purpose problem-solving. We investigate Video Diffusion Models (VDMs) as a promising direction for bridging this gap. Pretraining on spatiotemporal data endows these models with strong inductive biases for structure and dynamics, which we hypothesize can support broad task adaptability. To test this, we design a controlled evaluation in which both a pretrained LLM and a pretrained VDM are equipped with lightweight adapters and presented with tasks in their natural modalities. Across benchmarks including ARC-AGI, ConceptARC, visual games, route planning, and cellular automata, VDMs demonstrate higher data efficiency than their language counterparts. Taken together, our results indicate that video pretraining offers inductive biases that support progress toward visual foundation models.
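The abstract says both models are "equipped with lightweight adapters" but does not specify the adapter type in this summary. As a minimal sketch, assuming a LoRA-style low-rank adapter in PyTorch, the idea looks roughly like this: the pretrained weights stay frozen and only a small low-rank residual is trained on the downstream task. The class name `LoRALinear` and all hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a "lightweight adapter" (assumed LoRA-style; not the paper's exact method).
# The pretrained weight W is frozen; only the low-rank matrices A and B are trained.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank residual: W x + scale * (B A) x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights fixed
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: adapter starts as identity
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


if __name__ == "__main__":
    # Toy usage: adapt a single frozen projection on random data.
    layer = LoRALinear(nn.Linear(64, 64), rank=4)
    opt = torch.optim.AdamW([p for p in layer.parameters() if p.requires_grad], lr=1e-3)
    x, target = torch.randn(16, 64), torch.randn(16, 64)
    loss = nn.functional.mse_loss(layer(x), target)
    loss.backward()
    opt.step()
    print("trainable params:", sum(p.numel() for p in layer.parameters() if p.requires_grad))
```

The point of such an adapter, in the controlled comparison the abstract describes, is that both the LLM and the VDM receive the same small trainable budget, so differences in few-shot performance can be attributed to the pretraining rather than to fine-tuning capacity.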
Similar Papers
From Generation to Generalization: Emergent Few-Shot Learning in Video Diffusion Models
CV and Pattern Recognition
Teaches computers to understand and do many visual tasks.
Compression then Matching: An Efficient Pre-training Paradigm for Multimodal Embedding
CV and Pattern Recognition
Makes computers understand pictures and words together better.
Infusing fine-grained visual knowledge to Vision-Language Models
CV and Pattern Recognition
Keeps AI smart while teaching new skills.