Score: 1

Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers

Published: June 10, 2025 | arXiv ID: 2506.09099v2

By: Joshua Barron, Devin White

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Clarifies how model size trades off between memorizing facts and generalizing to new problems, informing the design and deployment of small language models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The relationship between memorization and generalization in large language models (LLMs) remains an open area of research, with growing evidence that the two are deeply intertwined. In this work, we investigate this relationship by pre-training a series of capacity-limited Transformer models from scratch on two synthetic character-level tasks designed to separately probe generalization (via arithmetic extrapolation) and memorization (via factual recall). We observe a consistent trade-off: small models extrapolate to unseen arithmetic cases but fail to memorize facts, while larger models memorize but fail to extrapolate. An intermediate-capacity model exhibits a similar shift toward memorization. When trained on both tasks jointly, no model (regardless of size) succeeds at extrapolation. These findings suggest that pre-training may intrinsically favor one learning mode over the other. By isolating these dynamics in a controlled setting, our study offers insight into how model capacity shapes learning behavior and carries broader implications for the design and deployment of small language models.
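
The abstract describes two synthetic character-level tasks and a sweep over model capacity. The following is a minimal sketch of what such a setup might look like; the task formats (two-operand addition, random key-value "facts") and the model sizes are illustrative assumptions, not the paper's actual data pipeline or configurations.

```python
import random
import string

def make_arithmetic_examples(n, max_operand=99, seed=0):
    """Character-level addition problems, e.g. '23+45=68'.

    Generalization can be probed by holding out operand ranges during
    training and testing extrapolation to unseen ones (an assumption here).
    """
    rng = random.Random(seed)
    return [
        f"{a}+{b}={a + b}"
        for a, b in (
            (rng.randint(0, max_operand), rng.randint(0, max_operand))
            for _ in range(n)
        )
    ]

def make_fact_examples(n, seed=0):
    """Synthetic key-value 'facts' with no underlying rule, e.g. 'kqzlt:msa'.

    Recalling a value requires memorizing its pair; there is nothing to
    generalize from one fact to another.
    """
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        key = "".join(rng.choices(string.ascii_lowercase, k=5))
        value = "".join(rng.choices(string.ascii_lowercase, k=3))
        examples.append(f"{key}:{value}")
    return examples

# Capacity is varied by scaling the Transformer's width and depth; these
# placeholder sizes stand in for the paper's small/intermediate/large models.
MODEL_CONFIGS = {
    "small":        {"d_model": 64,  "n_layers": 2, "n_heads": 2},
    "intermediate": {"d_model": 128, "n_layers": 4, "n_heads": 4},
    "large":        {"d_model": 256, "n_layers": 8, "n_heads": 8},
}

if __name__ == "__main__":
    print(make_arithmetic_examples(3))
    print(make_fact_examples(3))
```

Training each configuration separately on the arithmetic data, the fact data, and their mixture would reproduce the three conditions compared in the study (extrapolation only, recall only, and joint training).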

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)