
LLM Architecture, Scaling Laws, and Economics: A Quick Summary

Published: September 11, 2025 | arXiv ID: 2511.11572v1

By: William H. Press

Potential Business Impact:

Provides rule-of-thumb scaling laws and rough cost estimates that help organizations budget the compute needed to build and train AI models of various sizes.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The current standard architecture of Large Language Models (LLMs) with QKV self-attention is briefly summarized, including the architecture of a typical Transformer. Scaling laws for compute (FLOPs) and memory (parameters plus data) are given, along with rough present-day (2025) cost estimates for LLMs of various parameter scales, including a discussion of whether DeepSeek should be viewed as a special case. Nothing here is new, but this material seems not otherwise readily available in summary form.
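
To make the QKV self-attention mentioned above concrete, here is a minimal single-head sketch in NumPy. The function and variable names are illustrative only and are not taken from the paper; causal masking and the multi-head projections used in real Transformers are omitted for brevity.

import numpy as np

def qkv_self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head) projections.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # scaled dot-product similarities
    scores -= scores.max(axis=-1, keepdims=True)     # stabilize the softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax attention weights
    return weights @ V                               # attention-weighted mixture of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # toy input: 4 tokens, model width 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(qkv_self_attention(X, Wq, Wk, Wv).shape)       # -> (4, 8)

The compute scaling law can likewise be illustrated. A commonly cited rule of thumb (not necessarily the exact constants used in the paper) puts training compute at roughly 6 x parameters x training tokens in FLOPs; dividing by an assumed sustained GPU throughput and multiplying by an assumed rental price gives a ballpark dollar figure. All numeric inputs below (throughput, utilization, price per GPU-hour, model and dataset sizes) are illustrative assumptions, not figures from the paper.

def training_cost_estimate(n_params, n_tokens,
                           flops_per_gpu_sec=1e15,     # assumed sustained throughput per GPU
                           utilization=0.4,            # assumed fraction of peak achieved
                           dollars_per_gpu_hour=2.0):  # assumed rental price
    total_flops = 6.0 * n_params * n_tokens             # common C ~ 6*N*D rule of thumb
    gpu_hours = total_flops / (flops_per_gpu_sec * utilization) / 3600.0
    return total_flops, gpu_hours, gpu_hours * dollars_per_gpu_hour

# Toy example: a 70-billion-parameter model trained on 1.4 trillion tokens (~20 tokens/param).
flops, hours, cost = training_cost_estimate(70e9, 1.4e12)
print(f"{flops:.2e} FLOPs, {hours:.2e} GPU-hours, ~${cost:,.0f}")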

Page Count
9 pages

Category
Computer Science:
General Literature