Physics of Language Models: Part 4.1, Architecture Design and the Magic of Canon Layers
By: Zeyuan Allen-Zhu
Potential Business Impact:
Makes AI reason better by letting nearby words share information through lightweight add-on layers.
Understanding architectural differences in language models is challenging, especially at academic-scale pretraining (e.g., 1.3B parameters, 100B tokens), where results are often dominated by noise and randomness. To overcome this, we introduce controlled synthetic pretraining tasks that isolate and evaluate core model capabilities. Within this framework, we discover Canon layers: lightweight architectural components, named after the musical term "canon," that promote horizontal information flow across neighboring tokens. Canon layers compute weighted sums of nearby token representations and integrate seamlessly into Transformers, linear attention, state-space models, or any sequence architecture. We present 12 key results, including how Canon layers enhance reasoning depth (e.g., by $2\times$), reasoning breadth, and knowledge manipulation. They lift weak architectures such as NoPE to match RoPE, and linear attention to rival state-of-the-art linear models such as Mamba2/GDN, validated both through synthetic tasks and real-world academic-scale pretraining. This synthetic playground offers an economical, principled path to isolating core model capabilities that are often obscured at academic scales. Equipped with infinite high-quality data, it may even predict how future architectures will behave as training pipelines improve (e.g., through better data curation or RL-based post-training), unlocking deeper reasoning and hierarchical inference.
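Since the abstract only sketches the mechanism, here is a minimal, hypothetical PyTorch sketch of the core idea: each token's representation gets a causal weighted sum of itself and a few preceding neighbors, added back residually. The kernel size, the depthwise-convolution implementation, and the residual placement are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class CanonLayerSketch(nn.Module):
    """Illustrative sketch of a Canon-style layer (not the paper's exact
    design): mixes each token with its K-1 previous neighbors via a
    learned, causal, depthwise weighted sum, plus a residual connection."""

    def __init__(self, hidden_dim: int, kernel_size: int = 4):
        super().__init__()
        # Depthwise causal conv: one filter per channel, so it mixes only
        # along the token (time) axis, never across the hidden axis.
        self.conv = nn.Conv1d(
            hidden_dim,
            hidden_dim,
            kernel_size=kernel_size,
            groups=hidden_dim,
            padding=kernel_size - 1,  # left context; right overhang trimmed below
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        h = x.transpose(1, 2)               # (batch, hidden, seq)
        h = self.conv(h)[..., : x.size(1)]  # trim right side -> strictly causal
        return x + h.transpose(1, 2)        # residual horizontal information flow


if __name__ == "__main__":
    layer = CanonLayerSketch(hidden_dim=64)
    tokens = torch.randn(2, 16, 64)  # (batch, seq, hidden)
    print(layer(tokens).shape)       # torch.Size([2, 16, 64])
```

Because such a layer only mixes information along the token axis and preserves the tensor shape, it can in principle be dropped between the existing blocks of a Transformer, linear-attention, or state-space model without changing their interfaces, which matches the abstract's claim of seamless integration.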
Similar Papers
A Survey on Large Language Models with some Insights on their Capabilities and Limitations
Computation and Language
Surveys what large language models can and cannot do.
Speed Always Wins: A Survey on Efficient Architectures for Large Language Models
Computation and Language
Surveys designs that make large language models faster and cheaper to run.
Layer Specialization Underlying Compositional Reasoning in Transformers
Machine Learning (CS)
Shows how individual Transformer layers specialize to combine reasoning steps.