The Future of Memory: Limits and Opportunities

Published: August 28, 2025 | arXiv ID: 2508.20425v1

By: Shuhan Liu, Samuel Dayo, Peijing Li, and more

Potential Business Impact:

Makes computers faster and more energy-efficient by placing memory closer to compute.

Business Areas:
Hardware

Memory latency, bandwidth, capacity, and energy increasingly limit performance. In this paper, we reconsider proposed system architectures that consist of huge (many-terabyte to petabyte scale) memories shared among large numbers of CPUs. We argue that two practical engineering challenges, scaling and signaling, limit such designs. We propose the opposite approach: rather than creating large, shared, homogeneous memories, systems should explicitly break memory up into smaller slices more tightly coupled with compute elements. Leveraging advances in 2.5D/3D integration, each compute-memory node provisions private local memory, enabling accesses to node-exclusive data over micrometer-scale distances at dramatically reduced access cost. In-package memory elements support shared state within a processor, providing far better bandwidth and energy efficiency than DRAM, which serves as main memory for large working sets and cold data. Hardware that makes memory capacities and distances explicit allows software to efficiently compose this hierarchy, managing data placement and movement.

Page Count
3 pages

Category
Computer Science:
Hardware Architecture