BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data
By: Jaap Jumelet, Abdellah Fourtassi, Akari Haga, and more
Potential Business Impact:
Teaches computers to learn languages like babies.
We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
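The paper's own training recipe is not reproduced here; as a rough illustration of what "training a baseline model" on a 100M-word-scale corpus might look like, the sketch below uses Hugging Face Transformers. The corpus file name, tokenizer choice, and model size are placeholder assumptions, not details taken from BabyBabelLM.

    # Minimal sketch: pretraining a small causal LM on a BabyLM-style text corpus.
    # "corpus.txt", the GPT-2 tokenizer, and the model dimensions are illustrative
    # assumptions, not the setup used in the paper.
    from datasets import load_dataset
    from transformers import (
        AutoTokenizer, GPT2Config, GPT2LMHeadModel,
        DataCollatorForLanguageModeling, Trainer, TrainingArguments,
    )

    # Hypothetical local corpus: one document per line.
    raw = load_dataset("text", data_files={"train": "corpus.txt"})

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

    # Small GPT-2-style configuration as a baseline; sizes are placeholder choices.
    config = GPT2Config(n_layer=6, n_head=8, n_embd=512, vocab_size=tokenizer.vocab_size)
    model = GPT2LMHeadModel(config)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="baby-baseline",
                               per_device_train_batch_size=8,
                               num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()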
Similar Papers
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Computation and Language
Teaches computers to learn language like babies.
Towards Data-Efficient Language Models: A Child-Inspired Approach to Language Learning
Computation and Language
Teaches computers to learn language like kids.
BabyVLM: Data-Efficient Pretraining of VLMs Inspired by Infant Learning
Computer Vision and Pattern Recognition
Teaches computers to learn like babies.