Scaling Intelligence: Designing Data Centers for Next-Gen Language Models

Published: June 17, 2025 | arXiv ID: 2506.15006v3

By: Jesmin Jahan Tithi, Hanjiang Wu, Avishaii Abuhatzera, and more

BigTech Affiliations: Intel

Potential Business Impact:

Guides the design of faster, cheaper data centers for very large AI models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The explosive growth of Large Language Models (LLMs), such as GPT-4 with 1.8 trillion parameters, demands a fundamental rethinking of data center architecture to ensure scalability, efficiency, and cost-effectiveness. Our work provides a comprehensive co-design framework that jointly explores FLOPS, HBM bandwidth and capacity, multiple network topologies (two-tier vs. FullFlat optical), the size of the scale-out domain, and popular parallelism/optimization strategies used in LLMs. We introduce and evaluate FullFlat network architectures, which provide uniform high-bandwidth, low-latency connectivity between all nodes, and demonstrate their transformative impact on performance and scalability. Through detailed sensitivity analyses, we quantify the benefits of overlapping compute and communication, leveraging hardware-accelerated collectives, widening the scale-out domain, and increasing memory capacity. Our study spans both sparse (mixture of experts) and dense transformer-based LLMs, revealing how system design choices affect Model FLOPS Utilization (MFU = Model FLOPS per token * Observed tokens per second / Peak FLOPS of the hardware) and overall throughput. For the co-design study, we utilized an analytical performance modeling tool capable of predicting LLM runtime within 10% of real-world measurements. Our findings offer actionable insights and a practical roadmap for designing AI data centers that can efficiently support trillion-parameter models, reduce optimization complexity, and sustain the rapid evolution of AI capabilities.
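As a rough illustration of the MFU definition quoted in the abstract, the Python sketch below plugs numbers into the formula MFU = (model FLOPs per token × observed tokens per second) / peak hardware FLOPS. The parameter count, throughput, peak-FLOPS figure, and the 6-FLOPs-per-parameter rule of thumb are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the MFU formula from the abstract:
#   MFU = (model FLOPs per token * observed tokens per second) / peak hardware FLOPS
# All numeric values below are assumed for illustration only.

def model_flops_utilization(flops_per_token: float,
                            observed_tokens_per_sec: float,
                            peak_flops: float) -> float:
    """Return MFU as a fraction in [0, 1]."""
    achieved_flops = flops_per_token * observed_tokens_per_sec
    return achieved_flops / peak_flops


if __name__ == "__main__":
    # Hypothetical dense-model training example:
    # ~6 FLOPs per parameter per token (forward + backward pass) for a
    # 1.8-trillion-parameter model -- an assumed rule of thumb, not a paper result.
    params = 1.8e12
    flops_per_token = 6 * params        # ~1.08e13 FLOPs per token

    observed_tokens_per_sec = 1.0e6     # assumed cluster-wide training throughput
    peak_flops = 2.5e19                 # assumed aggregate peak FLOPS of the cluster

    mfu = model_flops_utilization(flops_per_token, observed_tokens_per_sec, peak_flops)
    print(f"MFU = {mfu:.1%}")           # ~43% under these assumed numbers
```

With these assumed inputs the achieved rate is about 1.08e19 FLOPS against a 2.5e19 peak, i.e. an MFU of roughly 43%; the paper's sensitivity analyses explore how network topology, memory, and overlap strategies move this figure.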

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Hardware Architecture