Layer Specialization Underlying Compositional Reasoning in Transformers

Published: October 20, 2025 | arXiv ID: 2510.17469v1

By: Jing Liu

Potential Business Impact:

Computers learn to build new ideas from old ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformers exhibit compositional reasoning on sequences not observed during training, a capability often attributed to in-context learning (ICL) and skill composition. We investigate this phenomenon using the Random Hierarchy Model (RHM), a probabilistic context-free grammar that generates sequences through recursive rule application. Models are trained on subsets of sequences and evaluated across four generalization conditions: memorization, in-distribution generalization, out-of-distribution generalization with the same rules, and cross-layer transfer. Behaviorally, performance improves systematically with task complexity and the number of in-context examples, with out-of-distribution tasks requiring substantially more examples than in-distribution scenarios. Mechanistically, we identify a progressive emergence of layer specialization during training that correlates with generalization performance. Principal component analysis and attention pattern clustering reveal that transformers develop structured, hierarchically organized representations in specialized layers. These results demonstrate that transformers develop modular, interpretable mechanisms supporting compositional reasoning, linking internal algorithmic structure to observed behavioral capabilities.
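The Random Hierarchy Model described above generates sequences by recursively expanding symbols through randomly chosen production rules, layer by layer, until terminals are reached. A minimal sketch of such a generator is below; all function names, grammar sizes, and branching factors are illustrative assumptions, not details taken from the paper.

```python
import random

def make_grammar(num_layers=3, symbols_per_layer=4, rules_per_symbol=2,
                 branching=2, seed=0):
    """Build random production rules: each symbol at layer l expands
    into a tuple of `branching` symbols at layer l+1.
    (Sizes here are illustrative, not the paper's settings.)"""
    rng = random.Random(seed)
    grammar = {}
    for layer in range(num_layers):
        for sym in range(symbols_per_layer):
            grammar[(layer, sym)] = [
                tuple(rng.randrange(symbols_per_layer) for _ in range(branching))
                for _ in range(rules_per_symbol)
            ]
    return grammar

def generate(grammar, layer, sym, num_layers, rng):
    """Recursively expand (layer, sym) down to terminal symbols."""
    if layer == num_layers:  # bottom layer reached: emit a terminal
        return [sym]
    rule = rng.choice(grammar[(layer, sym)])  # pick a production at random
    seq = []
    for child in rule:
        seq.extend(generate(grammar, layer + 1, child, num_layers, rng))
    return seq

num_layers = 3
grammar = make_grammar(num_layers=num_layers)
rng = random.Random(42)
seq = generate(grammar, 0, 0, num_layers, rng)
print(seq)       # 2**3 = 8 terminal symbols (branching 2, depth 3)
print(len(seq))
```

Because every expansion is a local rule, a model that learns the rules at each level can compose them to produce sequences it never saw in training, which is the compositional-generalization setting the paper studies.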

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)