NeurIPS 2025 E2LM Competition: Early Training Evaluation of Language Models
By: Mouadh Yagoubi, Yasser Dahou, Billel Mokeddem, and more
Potential Business Impact:
Tests how well new AI learns facts early on.
Existing benchmarks have proven effective for assessing the performance of fully trained large language models. In the early training stages of small models, however, these benchmarks often fail to provide meaningful or discriminative signals. To address this gap, this competition tackles the challenge of designing scientific knowledge evaluation tasks specifically tailored to measuring the early training progress of language models. Participants are invited to develop novel evaluation methodologies or adapt existing benchmarks to better capture performance differences among language models. To support this effort, we provide three pre-trained small models (0.5B, 1B, and 3B parameters), along with intermediate checkpoints sampled during training up to 200B tokens. All experiments and development work can be run on widely available free cloud-based GPU platforms, making participation accessible to researchers with limited computational resources. Submissions will be evaluated on three criteria: the quality of the performance signal they produce, the consistency of model rankings at 1 trillion training tokens, and their relevance to the scientific knowledge domain. By promoting the design of evaluation strategies tailored to early training, this competition aims to attract participants from a broad range of disciplines, including those who are not machine learning experts or who lack access to dedicated GPU resources. Ultimately, this initiative seeks to make foundational LLM research more systematic and benchmark-informed from the earliest phases of model development.
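For intuition, here is a minimal sketch of the kind of evaluation the competition describes: scoring a model's intermediate checkpoints on a multiple-choice scientific-knowledge task to obtain a performance curve, then checking how well an early-checkpoint ranking of models agrees with the ranking at the end of training. The model repo name, revision tags, question set, and numeric scores below are hypothetical placeholders, not the competition's actual artifacts; the snippet assumes Hugging Face transformers, PyTorch, and SciPy are installed.

```python
# Sketch: evaluate intermediate checkpoints on a tiny multiple-choice
# benchmark and measure ranking consistency. All names below marked
# "placeholder" are hypothetical, not the competition's real assets.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from scipy.stats import spearmanr

QUESTIONS = [
    # (prompt, answer options, index of the correct option)
    ("The chemical symbol for iron is", [" Fe", " Ir", " In"], 0),
    ("Light travels fastest in", [" a vacuum", " water", " glass"], 0),
]

def option_logprob(model, tokenizer, prompt, option):
    """Sum of log-probabilities the model assigns to the option tokens.

    Note: this assumes tokenizing prompt + option preserves the prompt's
    token boundary, which holds for these simple space-prefixed options.
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Each token at position p is predicted from position p - 1.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    for pos in range(prompt_len, full_ids.shape[1]):
        total += logprobs[pos - 1, full_ids[0, pos]].item()
    return total

def accuracy(model, tokenizer):
    """Fraction of questions where the correct option scores highest."""
    correct = 0
    for prompt, options, answer in QUESTIONS:
        scores = [option_logprob(model, tokenizer, prompt, o) for o in options]
        correct += int(max(range(len(options)), key=scores.__getitem__) == answer)
    return correct / len(QUESTIONS)

# Placeholder repo and revision tags standing in for checkpoints
# sampled along the training run (up to 200B tokens).
REPO = "example-org/smol-0.5b"
REVISIONS = ["tokens-10B", "tokens-50B", "tokens-100B", "tokens-200B"]

tokenizer = AutoTokenizer.from_pretrained(REPO)
curve = []
for rev in REVISIONS:
    model = AutoModelForCausalLM.from_pretrained(REPO, revision=rev)
    model.eval()
    curve.append(accuracy(model, tokenizer))
print("score curve across checkpoints:", curve)

# Ranking consistency: compare model rankings at an early checkpoint
# against rankings at 1T training tokens (dummy scores for 3 models).
early_scores = [0.31, 0.42, 0.55]  # 0.5B, 1B, 3B at an early checkpoint
final_scores = [0.48, 0.61, 0.70]  # the same models at 1T tokens
rho, _ = spearmanr(early_scores, final_scores)
print("Spearman rank correlation:", rho)
```

A benchmark yielding a smooth, monotonic score curve and a rank correlation near 1.0 would score well on the first two evaluation criteria; a flat or noisy curve indicates the task gives no discriminative signal early in training.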
Similar Papers
NeurIPS 2023 LLM Efficiency Fine-tuning Competition
Computation and Language
Makes AI smarter by cleaning its learning data.
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Computation and Language
Teaches computers to learn language like babies.
IberBench: LLM Evaluation on Iberian Languages
Computation and Language
Tests AI language skills across Iberian languages like Spanish and Portuguese.