Mask and You Shall Receive: Optimizing Masked Language Modeling For Pretraining BabyLMs
By: Lukas Edman, Alexander Fraser
Potential Business Impact:
Teaches computers to understand words better.
We describe our strategy for the 2025 edition of the BabyLM Challenge. Our main contribution is an improved form of Masked Language Modeling (MLM), which adapts the probability with which each token is masked according to the model's ability to predict it. The results show a substantial increase in performance on (Super)GLUE tasks over standard MLM. We also incorporate sub-token embeddings, finding that this increases the model's morphological generalization capabilities. Our submission beats the baseline in the strict-small track.
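The abstract does not spell out the exact weighting scheme, but the core idea of masking tokens in proportion to how hard the model finds them can be sketched as follows. This is a minimal illustration, assuming a per-token exponential moving average of the MLM loss drives the masking probability; the names (AdaptiveMasker, ema_loss, update, mask) are hypothetical and not taken from the paper.

```python
# Illustrative sketch of loss-adaptive token masking for MLM pretraining.
# Assumption: masking probability scales with a smoothed per-token loss estimate.
import torch

class AdaptiveMasker:
    """Tracks a per-vocabulary-token difficulty estimate (an EMA of the model's
    cross-entropy on that token) and masks harder tokens more often."""

    def __init__(self, vocab_size, mask_token_id, base_rate=0.15, momentum=0.99):
        self.ema_loss = torch.ones(vocab_size)  # smoothed per-token loss
        self.mask_token_id = mask_token_id
        self.base_rate = base_rate               # average masking rate (standard MLM uses 0.15)
        self.momentum = momentum

    def update(self, token_ids, token_losses):
        """Update difficulty estimates from the unreduced MLM loss of a batch."""
        for tid, loss in zip(token_ids.flatten().tolist(), token_losses.flatten().tolist()):
            self.ema_loss[tid] = self.momentum * self.ema_loss[tid] + (1 - self.momentum) * loss

    def mask(self, input_ids):
        """Return (masked_input_ids, labels) with masking probability proportional to
        each token's estimated difficulty, rescaled so the mean rate stays near base_rate.
        Handling of special tokens is omitted for brevity."""
        difficulty = self.ema_loss[input_ids]                    # (batch, seq)
        probs = (self.base_rate * difficulty / difficulty.mean()).clamp(0.0, 1.0)
        mask = torch.bernoulli(probs).bool()
        labels = torch.where(mask, input_ids, torch.full_like(input_ids, -100))
        masked = torch.where(mask, torch.full_like(input_ids, self.mask_token_id), input_ids)
        return masked, labels
```

In a pretraining loop, one would call mask before the forward pass and update with the unreduced cross-entropy afterwards, so that tokens the model already predicts well are masked less often over time.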
Similar Papers
Masked Diffusion Language Models with Frequency-Informed Training
Computation and Language
Teaches computers language with less text.
Learning Dynamics of Meta-Learning in Small Model Pretraining
Computation and Language
Trains AI faster and easier to understand.
Soft-Masked Diffusion Language Models
Machine Learning (CS)
Helps computers write better code, faster.