RACAM: Enhancing DRAM with Reuse-Aware Computation and Automated Mapping for ML Inference
By: Siyuan Ma, Jiajun Hu, Jeeho Ryoo, and more
Potential Business Impact:
Makes AI models run much faster inside computer memory.
In-DRAM Processing-In-Memory (DRAM-PIM) has emerged as a promising approach to accelerating memory-intensive workloads by mitigating data transfer overhead between DRAM and the host processor. Bit-serial DRAM-PIM architectures further improve efficiency by supporting runtime-variable data precision, which is critical for emerging workloads such as large language model (LLM) inference. However, existing designs still have major limitations: lack of data reuse, significant redundant data transfer, and insufficient support for workload mapping. To address these issues, we propose RACAM, the first in-DRAM bit-serial architecture that uses dedicated locality buffers, bit-serial PEs, popcount reduction units, and broadcast units to enable data reuse and reduce redundant data transfers. Furthermore, we propose a workload mapping mechanism that fully exploits the massive parallelism of the DRAM architecture and identifies the best mapping scheme for a given workload. We evaluate RACAM against GPUs and the state-of-the-art in-DRAM PIM system, Proteus, on end-to-end LLM inference. RACAM achieves 9x to 102x speedup over GPUs and 233x higher performance per mm² than Proteus on GPT-3.
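To give a rough sense of the bit-serial, popcount-based computation style the abstract describes, the sketch below simulates in software how a dot product can be computed one bit-plane at a time: AND two packed bit-planes, popcount the result, and weight it by bit significance. This is only an analogy under stated assumptions (unsigned integer operands, illustrative function names and bit widths), not RACAM's actual hardware or the paper's code.

```python
# Minimal software sketch of bit-serial dot product via bit-plane
# AND + popcount. In hardware, each packed word would correspond to
# a DRAM row, the AND to the bit-serial PEs, and the bit count to
# the popcount reduction units. Names/widths here are assumptions.

def bitplane(values, bit):
    """Pack bit `bit` of every element into one integer word."""
    word = 0
    for idx, v in enumerate(values):
        word |= ((v >> bit) & 1) << idx
    return word

def bit_serial_dot(a, b, wa=8, wb=8):
    """Dot product of two unsigned-int vectors, one bit-plane at a time."""
    acc = 0
    for i in range(wa):
        plane_a = bitplane(a, i)
        for j in range(wb):
            plane_b = bitplane(b, j)
            # AND two bit-planes, reduce with popcount, then shift
            # by the combined bit significance 2^(i + j).
            acc += bin(plane_a & plane_b).count("1") << (i + j)
    return acc

a = [3, 5, 7, 2]
b = [4, 1, 6, 9]
assert bit_serial_dot(a, b) == sum(x * y for x, y in zip(a, b))
```

Because precision enters only through the loop bounds, the same scheme supports runtime-variable data precision: narrower operands simply mean fewer bit-plane passes.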
Similar Papers
New Tools, Programming Models, and System Support for Processing-in-Memory Architectures
Hardware Architecture
Makes computer chips work faster inside memory.
DL-PIM: Improving Data Locality in Processing-in-Memory Systems
Hardware Architecture
Moves computer data closer for faster work.
PIMfused: Near-Bank DRAM-PIM with Fused-layer Dataflow for CNN Data Transfer Optimization
Hardware Architecture
Speeds up computer "brains" by moving work closer to memory.