Learning Through Little Eyes: Attribute Discrimination Beyond Objects
By: Patrick Batsell, Tsutsui Satoshi, Bihan Wen
Infants learn to recognize not only object categories but also fine-grained attributes such as color, size, and texture within their first two years of life. Prior work explores Child's View for Contrastive Learning (CVCL), a CLIP-style model trained on infant egocentric video, as a computational model of early infant learning, but it focuses only on class-level recognition, leaving it unclear whether infant-scale learning also supports attribute discrimination. To address this, we introduce a benchmark that systematically varies color, size, and texture, allowing controlled tests of within-class attribute recognition. Comparing CVCL with CLIP reveals clear differences: CVCL is better at size discrimination, while CLIP achieves higher accuracy on color discrimination. Both models represent texture in their image embeddings but fail to ground texture linguistically, suggesting a gap between the visual and language spaces.
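Because CVCL and CLIP share the same contrastive image-text interface, attribute discrimination can be probed zero-shot by ranking attribute prompts against each benchmark image. Below is a minimal sketch of such a probe using the Hugging Face transformers CLIP API; the checkpoint, image path, and prompt wording are illustrative assumptions, not the paper's actual benchmark setup.

```python
# Zero-shot attribute probe, CLIP-style: score each attribute prompt against
# one image and pick the highest-similarity prompt. Illustrative sketch only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("ball_example.jpg")  # hypothetical benchmark image
prompts = ["a red ball", "a blue ball", "a green ball"]  # color probe

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores

pred = prompts[logits.softmax(dim=-1).argmax().item()]
print(pred)
```

The same scoring loop applies to CVCL, though its released checkpoint ships with its own loading code rather than the transformers API, so only the model-loading lines would change.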
Similar Papers
A solution to generalized learning from small training sets found in everyday infant experiences
Computer Vision and Pattern Recognition
Teaches computers to learn like babies.
Learning to See Through a Baby's Eyes: Early Visual Diets Enable Robust Visual Intelligence in Humans and Machines
Computer Vision and Pattern Recognition
Teaches computers to see like babies.
Discovering Hidden Visual Concepts Beyond Linguistic Input in Infant Learning
Computer Vision and Pattern Recognition
Computers learn to see beyond words.