BabyVLM-V2: Toward Developmentally Grounded Pretraining and Benchmarking of Vision Foundation Models
By: Shengao Wang, Wenqi Wang, Zecheng Wang, and more
Potential Business Impact:
Teaches computers to learn like babies.
Early childhood developmental trajectories set a natural goal for sample-efficient pretraining of vision foundation models. We introduce BabyVLM-V2, a developmentally grounded framework for infant-inspired vision-language modeling that substantially improves upon BabyVLM-V1 through a longitudinal, multifaceted pretraining set, a versatile model, and, most importantly, the DevCV Toolbox for cognitive evaluation. The pretraining set maximizes coverage of a longitudinal, infant-centric audiovisual corpus while minimizing curation, yielding video-utterance, image-utterance, and multi-turn conversational data that mirror infant experiences. The DevCV Toolbox adapts all vision-related measures of the recently released NIH Baby Toolbox into a benchmark suite of ten multimodal tasks covering spatial reasoning, memory, and vocabulary understanding, aligned with young children's capabilities. Experimental results show that a compact model pretrained from scratch can achieve competitive performance on the DevCV Toolbox, outperforming GPT-4o on some tasks. We hope the principled, unified BabyVLM-V2 framework will accelerate research in developmentally plausible pretraining of vision foundation models.
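The abstract frames the DevCV Toolbox as a suite of ten multimodal tasks scored per task. As a rough illustration of what such an evaluation harness might look like, here is a minimal Python sketch; the task names, data layout, and model interface below are hypothetical assumptions for illustration, not the released BabyVLM-V2 or NIH Baby Toolbox API.

```python
# Minimal sketch of a per-task evaluation loop over a DevCV-Toolbox-style suite.
# The example schema, task keys, and the `model(image_path, prompt)` callable
# are hypothetical; they are not drawn from the BabyVLM-V2 release.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class MultimodalExample:
    image_path: str  # path to the visual stimulus
    prompt: str      # task instruction or question posed to the model
    answer: str      # gold label (e.g., a spatial relation or vocabulary word)


def evaluate_task(model: Callable[[str, str], str],
                  examples: List[MultimodalExample]) -> float:
    """Return exact-match accuracy of the model on a single task."""
    correct = sum(
        model(ex.image_path, ex.prompt).strip().lower() == ex.answer.lower()
        for ex in examples
    )
    return correct / max(len(examples), 1)


def evaluate_suite(model: Callable[[str, str], str],
                   suite: Dict[str, List[MultimodalExample]]) -> Dict[str, float]:
    """Score each task (e.g., spatial reasoning, memory, vocabulary) separately."""
    return {task_name: evaluate_task(model, exs) for task_name, exs in suite.items()}
```

Per-task accuracies, rather than a single pooled score, are what would support claims such as a compact model outperforming GPT-4o on some tasks while trailing on others.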
Similar Papers
BabyVLM: Data-Efficient Pretraining of VLMs Inspired by Infant Learning
CV and Pattern Recognition
Teaches computers to learn like babies.
BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data
Computation and Language
Teaches computers to learn languages like babies.
Assessing the alignment between infants' visual and linguistic experience using multimodal language models
CV and Pattern Recognition
Helps babies learn words by watching and listening.