Layer Specialization Underlying Compositional Reasoning in Transformers
By: Jing Liu
Potential Business Impact:
Computers learn to build new ideas from old ones.
Transformers exhibit compositional reasoning on sequences not observed during training, a capability often attributed to in-context learning (ICL) and skill composition. We investigate this phenomenon using the Random Hierarchy Model (RHM), a probabilistic context-free grammar that generates sequences through recursive rule application. Models are trained on subsets of sequences and evaluated across four generalization conditions: memorization, in-distribution generalization, out-of-distribution generalization with the same rules, and cross-layer transfer. Behaviorally, performance improves systematically with task complexity and the number of in-context examples, with out-of-distribution tasks requiring substantially more examples than in-distribution scenarios. Mechanistically, we identify a progressive emergence of layer specialization during training that correlates with generalization performance. Principal component analysis and attention pattern clustering reveal that transformers develop structured, hierarchically organized representations in specialized layers. These results demonstrate that transformers develop modular, interpretable mechanisms supporting compositional reasoning, linking internal algorithmic structure to observed behavioral capabilities.
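The abstract describes the Random Hierarchy Model as a probabilistic context-free grammar that produces sequences by recursive rule application. Below is a minimal sketch of that idea under assumed parameters: the function names, the uniform sampling of productions, and the specific values for symbols, rules per symbol, branching factor, and depth are illustrative choices, not the paper's actual RHM configuration.

```python
import random

def build_rhm_rules(num_symbols, num_rules, branching, depth, seed=0):
    """Sample a random rule set: at every level, each symbol gets
    `num_rules` productions, each expanding it into `branching` symbols
    of the next level (hypothetical parameterization)."""
    rng = random.Random(seed)
    rules = []
    for _ in range(depth):
        level = {
            sym: [tuple(rng.randrange(num_symbols) for _ in range(branching))
                  for _ in range(num_rules)]
            for sym in range(num_symbols)
        }
        rules.append(level)
    return rules

def generate_sequence(rules, root, rng):
    """Expand the root symbol through all levels by recursive rule
    application and return the leaf-level sequence."""
    frontier = [root]
    for level in rules:
        frontier = [child
                    for sym in frontier
                    for child in rng.choice(level[sym])]
    return frontier

rng = random.Random(1)
rules = build_rhm_rules(num_symbols=8, num_rules=2, branching=2, depth=3)
print(generate_sequence(rules, root=0, rng=rng))  # leaf sequence of length branching**depth
```

In this sketch, training subsets would correspond to sequences generated from a restricted set of productions, while held-out rule combinations probe the in-distribution and out-of-distribution generalization conditions the abstract lists.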
Similar Papers
Out-of-distribution Tests Reveal Compositionality in Chess Transformers
Machine Learning (CS)
Computer learns chess rules, plays new ways.
Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures
Machine Learning (CS)
Makes AI learn language structure faster.
Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data?
Artificial Intelligence
Computers learn to connect ideas like humans.