LLM Architecture, Scaling Laws, and Economics: A Quick Summary
By: William H. Press
Potential Business Impact:
Helps estimate what it costs, in compute and money, to build AI models of different sizes.
The current standard architecture of Large Language Models (LLMs) with QKV self-attention is briefly summarized, including the architecture of a typical Transformer. Scaling laws for compute (flops) and memory (parameters plus data) are given, along with rough present-day (2025) cost estimates for the parameters of LLMs of various scales, including a discussion of whether DeepSeek should be viewed as a special case. Nothing here is new, but this material seems not otherwise readily available in summary form.
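As a concrete illustration of the two ingredients the abstract names, below is a minimal NumPy sketch of single-head QKV self-attention together with the standard rough training-compute estimate C ≈ 6ND flops (N parameters, D training tokens). The shapes, function names, and the 70B-parameter / 1.4T-token example are illustrative assumptions, not figures taken from the paper.

```python
# A minimal sketch: single-head scaled dot-product QKV self-attention,
# plus the rough training-compute estimate C ~ 6 * N * D flops
# (N = parameter count, D = training tokens). Illustrative only.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head QKV self-attention over a sequence x of shape (T, d_model)."""
    Q = x @ Wq                                       # queries, (T, d_k)
    K = x @ Wk                                       # keys,    (T, d_k)
    V = x @ Wv                                       # values,  (T, d_v)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (T, T) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (T, d_v) attended output

def training_flops(n_params, n_tokens):
    """Rough forward+backward training cost: ~6 flops per parameter per token."""
    return 6 * n_params * n_tokens

# Example (assumed numbers): a 70B-parameter model trained on 1.4T tokens
# needs on the order of 6 * 7e10 * 1.4e12 ~ 6e23 flops.
rng = np.random.default_rng(0)
T, d_model, d_k = 8, 16, 16
x = rng.normal(size=(T, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)       # (8, 16)
print(f"{training_flops(7e10, 1.4e12):.2e}")     # ~5.88e+23 flops
```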
Similar Papers
Speed Always Wins: A Survey on Efficient Architectures for Large Language Models
Computation and Language
Surveys faster, more efficient architectures for large AI language models.
Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs
Machine Learning (CS)
Studies how model design choices affect the cost of running AI models.
Large Language Model Scaling Laws for Neural Quantum States in Quantum Chemistry
Machine Learning (CS)
Applies AI scaling laws to machine-learning models for quantum chemistry.