Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers
By: Joshua Barron, Devin White
Potential Business Impact:
Shows how an AI model's size decides whether it remembers facts or solves new problems.
The relationship between memorization and generalization in large language models (LLMs) remains an open area of research, with growing evidence that the two are deeply intertwined. In this work, we investigate this relationship by pre-training a series of capacity-limited Transformer models from scratch on two synthetic character-level tasks designed to separately probe generalization (via arithmetic extrapolation) and memorization (via factual recall). We observe a consistent trade-off: small models extrapolate to unseen arithmetic cases but fail to memorize facts, while larger models memorize but fail to extrapolate. An intermediate-capacity model exhibits a similar shift toward memorization. When trained on both tasks jointly, no model (regardless of size) succeeds at extrapolation. These findings suggest that pre-training may intrinsically favor one learning mode over the other. By isolating these dynamics in a controlled setting, our study offers insight into how model capacity shapes learning behavior and offers broader implications for the design and deployment of small language models.
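To make the experimental setup concrete, below is a minimal sketch (not the authors' released code) of the two synthetic character-level tasks the abstract describes: an arithmetic task whose test split uses held-out operand ranges to probe extrapolation, and a factual-recall task of arbitrary key-value pairs that can only be memorized. The exact string formats, digit ranges, and dataset sizes are assumptions for illustration.

```python
# Sketch of the two synthetic character-level tasks (formats and sizes are
# assumptions, not the paper's exact protocol).
import random


def make_arithmetic_split(n_train=10_000, n_test=1_000, max_train_operand=99):
    """Addition strings like '12+34=46'.

    Extrapolation is probed by drawing test operands from a range never seen
    during training (assumed held-out-range protocol).
    """
    def sample(lo, hi):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        return f"{a}+{b}={a + b}"

    train = [sample(0, max_train_operand) for _ in range(n_train)]
    test = [sample(max_train_operand + 1, 10 * max_train_operand) for _ in range(n_test)]
    return train, test


def make_factual_recall(n_facts=5_000, key_len=6, val_len=6):
    """Random key->value 'facts' that cannot be inferred, only memorized."""
    letters = "abcdefghijklmnopqrstuvwxyz"

    def rand_str(k):
        return "".join(random.choice(letters) for _ in range(k))

    return [f"{rand_str(key_len)}|{rand_str(val_len)}" for _ in range(n_facts)]


if __name__ == "__main__":
    random.seed(0)
    arith_train, arith_test = make_arithmetic_split()
    facts = make_factual_recall()
    print(arith_train[0], arith_test[0], facts[0])
```

Evaluating a small character-level Transformer on the held-out arithmetic range would then measure generalization, while exact-match accuracy on the key-value pairs would measure memorization, mirroring the trade-off the paper reports across model capacities.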
Similar Papers
Capacity Matters: a Proof-of-Concept for Transformer Memorization on Real-World Data
Computation and Language
Makes AI remember more by changing its brain.
Memory Limitations of Prompt Tuning in Transformers
Machine Learning (CS)
Computers forget things when given too much information.
Mitigating Catastrophic Forgetting in Continual Learning through Model Growth
Computation and Language
Keeps AI smart when learning new things.