Assessing the alignment between infants' visual and linguistic experience using multimodal language models
By: Alvin Wei Ming Tan, Jane Yang, Tarun Sepuri, and others
Potential Business Impact:
Measures how often babies see the things that words name, improving models of how words are learned.
Figuring out which objects or concepts words refer to is a central challenge of language learning for young children. Most models of this process posit that children learn early object labels from co-occurrences of words and their referents, which arise when someone nearby talks about an object in the child's immediate physical environment. But how closely aligned in time are children's visual and linguistic experiences during everyday learning? To date, answers to this question have been limited by the need for labor-intensive manual annotation of vision-language co-occurrences. Here, we evaluate the use of contrastive language-image pretraining (CLIP) models to automatically characterize vision-language alignment in egocentric videos taken from the infant perspective in home environments. After validating CLIP alignment scores against human alignment judgments, we apply this metric to a large corpus of infant-perspective videos. We show that idealized aligned moments for learning (e.g., "look at the ball" uttered while a ball is in the child's view) are relatively rare in children's everyday experience compared with modern machine learning datasets, and we highlight variability in alignment both within and across children. These findings suggest that infrequent alignment is a constraint on models of early word learning, and they offer a new method for investigating children's multimodal environments.
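To make the core computation concrete, here is a minimal sketch of scoring vision-language alignment between a transcribed utterance and a concurrent video frame with a CLIP model, using the Hugging Face transformers library. The checkpoint, file name, and helper function are illustrative assumptions, not the authors' actual pipeline, which would also involve frame sampling, transcription, and validation against human judgments.

```python
# Minimal sketch (not the authors' pipeline): CLIP alignment score between
# one utterance and one video frame, via Hugging Face transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A common public CLIP checkpoint; the paper's exact model is an assumption.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def alignment_score(utterance: str, frame: Image.Image) -> float:
    """Cosine similarity between an utterance and a frame, in [-1, 1]."""
    inputs = processor(text=[utterance], images=frame,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"])
        image_emb = model.get_image_features(
            pixel_values=inputs["pixel_values"])
    # Normalize so the dot product is a cosine similarity.
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    return float((text_emb @ image_emb.T).item())

# Hypothetical usage: score an utterance against the frame visible as it
# was spoken (frame extraction is assumed to have happened upstream).
frame = Image.open("frame_001.jpg")
print(alignment_score("look at the ball", frame))
```

Applied over every utterance-frame pair in a corpus, scores like this can be aggregated to estimate how often aligned moments occur, which is the kind of measurement the paper reports.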
Similar Papers
uCLIP: Parameter-Efficient Multilingual Extension of Vision-Language Models with Unpaired Data
CV and Pattern Recognition
Helps computers understand pictures in many languages.
Contrastive vision-language learning with paraphrasing and negation
CV and Pattern Recognition
Teaches computers to understand words that change meaning.
Language-Guided Invariance Probing of Vision-Language Models
CV and Pattern Recognition
Tests if AI understands words that mean the same thing.