Mask and You Shall Receive: Optimizing Masked Language Modeling For Pretraining BabyLMs
By: Lukas Edman, Alexander Fraser
Potential Business Impact:
Teaches computers to understand words better.
We describe our strategy for the 2025 edition of the BabyLM Challenge. Our main contribution is an improved form of Masked Language Modeling (MLM) that adapts the masking probabilities of tokens according to the model's ability to predict them. The results show a substantial increase in performance on (Super)GLUE tasks over standard MLM. We also incorporate sub-token embeddings, finding that this increases the model's morphological generalization capabilities. Our submission beats the baseline in the strict-small track.
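To make the core idea concrete, below is a minimal sketch of difficulty-adaptive masking: tokens the model predicts poorly (high running loss) are masked more often, and easy tokens less often. All names here (AdaptiveMasker, update_difficulty, the clamping range, the EMA momentum) are illustrative assumptions, not the authors' actual implementation.

```python
import torch


class AdaptiveMasker:
    """Sketch of MLM masking whose per-token probability tracks prediction difficulty."""

    def __init__(self, vocab_size, mask_token_id, base_rate=0.15, momentum=0.9):
        self.mask_token_id = mask_token_id
        self.base_rate = base_rate  # standard MLM masking rate as the starting point
        self.momentum = momentum
        # Running per-token difficulty estimate (e.g., average cross-entropy loss).
        self.difficulty = torch.ones(vocab_size)

    def update_difficulty(self, token_ids, losses):
        # Exponential moving average of each token's prediction loss.
        for tid, loss in zip(token_ids.tolist(), losses.tolist()):
            self.difficulty[tid] = (
                self.momentum * self.difficulty[tid] + (1 - self.momentum) * loss
            )

    def mask(self, input_ids):
        # Scale the base masking rate by relative difficulty, then clamp
        # so no token is masked never or almost always (bounds are assumptions).
        rel = self.difficulty[input_ids] / self.difficulty.mean()
        probs = (self.base_rate * rel).clamp(0.05, 0.5)
        mask = torch.bernoulli(probs).bool()
        masked = input_ids.clone()
        masked[mask] = self.mask_token_id
        return masked, mask
```

In a training loop, one would call `update_difficulty` with the per-token losses from each batch and `mask` when preparing the next batch, so the masking distribution gradually concentrates on tokens the model still struggles to predict.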
Similar Papers
Masked Diffusion Language Models with Frequency-Informed Training
Computation and Language
Teaches computers language with less text.
Linguistic Entity Masking to Improve Cross-Lingual Representation of Multilingual Language Models for Low-Resource Languages
Computation and Language
Helps computers understand rare languages better.
ExLM: Rethinking the Impact of [MASK] Tokens in Masked Language Models
Computation and Language
Helps computers understand words better by fixing confusion.