PIMfused: Near-Bank DRAM-PIM with Fused-layer Dataflow for CNN Data Transfer Optimization
By: Simei Yang, Xinyu Shi, Lu Zhao, and more
Potential Business Impact:
Speeds up computer "brains" by moving work closer to memory.
Near-bank Processing-in-Memory (PIM) architectures integrate processing cores (PIMcores) close to DRAM banks to mitigate the high cost of off-chip memory accesses. When accelerating convolutional neural networks (CNNs) on DRAM-PIM, performance is often constrained by cross-bank (or cross-PIMcore) data transfers, which are induced by the conventional layer-by-layer dataflow that enforces inter-bank (or inter-PIMcore) dependencies across successive CNN layers. To address this challenge, we propose PIMfused, a hardware-software co-design that enables fused-layer dataflow for end-to-end CNN execution in near-bank DRAM-PIM. By adopting fused-layer dataflow, PIMfused improves data reuse and, more importantly, breaks inter-bank data dependencies, thereby optimizing cross-bank data transfers without sacrificing bank-level parallelism. We study the impact of buffer sizes and PIMcore parallelism (1-bank vs. 4-bank) on PIMfused using end-to-end ResNet18. We present three key takeaways and show that with 4-bank PIMcores, PIMfused achieves overall PPA gains over a GDDR6-AiM-like baseline, cutting memory cycles to 30.6%, energy to 83.4%, and area to 76.5% of the baseline.
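The core idea of fused-layer dataflow can be illustrated with a minimal, self-contained sketch. The code below is not the paper's implementation; it is a simplified 1D analogy (hypothetical function names, plain Python lists) contrasting the conventional layer-by-layer dataflow, which fully materializes the intermediate feature map (the data that would cross banks), with a fused dataflow that computes each output tile through both layers at once, so the intermediate slice never leaves the local buffer.

```python
def conv1d(x, k):
    """'Valid' 1D convolution: output length len(x) - len(k) + 1."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def layer_by_layer(x, k1, k2):
    """Conventional dataflow: the whole intermediate feature map is
    materialized between layers (this is what would cross banks)."""
    inter = conv1d(x, k1)
    return conv1d(inter, k2)

def fused(x, k1, k2, tile=4):
    """Fused-layer dataflow: each output tile computes only the
    intermediate slice (plus halo) it needs, keeping it bank-local."""
    n1, n2 = len(k1), len(k2)
    out_len = len(x) - n1 - n2 + 2
    out = []
    for start in range(0, out_len, tile):
        stop = min(start + tile, out_len)
        # Input slice covering outputs [start, stop) through both layers.
        x_slice = x[start : stop + n1 + n2 - 2]
        inter_tile = conv1d(x_slice, k1)  # never stored globally
        out.extend(conv1d(inter_tile, k2))
    return out
```

Both dataflows produce identical outputs; the difference is that the fused version trades a small halo recomputation at tile edges for the elimination of the inter-layer transfer, which mirrors how PIMfused removes inter-bank dependencies without reducing bank-level parallelism.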
Similar Papers
A Tensor Compiler for Processing-In-Memory Architectures
Hardware Architecture
Makes AI models run much faster on new chips.
Membrane: Accelerating Database Analytics with Bank-Level DRAM-PIM Filtering
Hardware Architecture
Makes computers faster by doing work inside memory.
RACAM: Enhancing DRAM with Reuse-Aware Computation and Automated Mapping for ML Inference
Hardware Architecture
Makes AI models run much faster inside computer memory.