Score: 2

Language Model Behavioral Phases are Consistent Across Architecture, Training Data, and Scale

Published: October 28, 2025 | arXiv ID: 2510.24963v1

By: James A. Michaelov, Roger P. Levy, Benjamin K. Bergen

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Shows that language models' word-level predictions during pretraining are largely explained by simple statistics (word frequency, n-gram probability, and word-context semantic similarity), regardless of architecture, training data, or scale.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We show that across architecture (Transformer vs. Mamba vs. RWKV), training dataset (OpenWebText vs. The Pile), and scale (14 million parameters to 12 billion parameters), autoregressive language models exhibit highly consistent patterns of change in their behavior over the course of pretraining. Based on our analysis of over 1,400 language model checkpoints on over 110,000 tokens of English, we find that up to 98% of the variance in language model behavior at the word level can be explained by three simple heuristics: the unigram probability (frequency) of a given word, the $n$-gram probability of the word, and the semantic similarity between the word and its context. Furthermore, we see consistent behavioral phases in all language models, with their predicted probabilities for words overfitting to those words' $n$-gram probabilities for increasing $n$ over the course of training. Taken together, these results suggest that learning in neural language models may follow a similar trajectory irrespective of model details.
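To make the abstract's analysis concrete, here is a minimal sketch (not the authors' code) of the kind of word-level regression it describes: computing the three heuristic predictors (unigram log-probability, n-gram log-probability, and word-context semantic similarity) and measuring how much variance in a model's per-word log-probabilities they explain. The toy corpus, random embeddings, and placeholder target values are assumptions for illustration only; in the paper, the target would be a checkpoint's actual predicted probabilities.

```python
# Illustrative sketch: regress per-word model log-probabilities on three
# heuristic predictors -- unigram frequency, n-gram probability, and
# word-context semantic similarity. All data here are toy placeholders.
from collections import Counter
import numpy as np
from sklearn.linear_model import LinearRegression

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# 1. Unigram log-probability (word frequency).
unigram_counts = Counter(corpus)
total = sum(unigram_counts.values())
def unigram_logprob(word):
    return np.log(unigram_counts[word] / total)

# 2. Bigram log-probability (n = 2 shown; the paper varies n), add-one smoothed.
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)
def bigram_logprob(prev, word):
    return np.log((bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size))

# 3. Semantic similarity: cosine between a word vector and the mean of its
#    context vectors (random embeddings stand in for real ones).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=16) for w in unigram_counts}
def context_similarity(word, context):
    ctx = np.mean([emb[c] for c in context], axis=0)
    vec = emb[word]
    return float(ctx @ vec / (np.linalg.norm(ctx) * np.linalg.norm(vec)))

# Assemble predictors per token (skipping the first, which has no context).
X, y = [], []
for i in range(1, len(corpus)):
    word, prev, context = corpus[i], corpus[i - 1], corpus[:i]
    X.append([unigram_logprob(word),
              bigram_logprob(prev, word),
              context_similarity(word, context)])
    # Placeholder target: in the paper this would be log P(word | context)
    # from a language model checkpoint.
    y.append(bigram_logprob(prev, word) + rng.normal(scale=0.1))

reg = LinearRegression().fit(np.array(X), np.array(y))
print("R^2 (variance explained by the three heuristics):",
      reg.score(np.array(X), np.array(y)))
```

Fitting this regression separately at each pretraining checkpoint and tracking which predictor dominates is one way to surface the behavioral phases the abstract describes, with the model's predictions aligning with n-gram statistics of increasing order as training proceeds.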

Country of Origin
🇺🇸 United States


Page Count
41 pages

Category
Computer Science:
Computation and Language