UMDAM: A Unified Data Layout and DRAM Address Mapping for Heterogeneous NPU-PIM
By: Hai Huang, Xuhong Qiang, Weisheng Zhao, and more
Potential Business Impact:
Makes AI on phones run much faster.
Large Language Models (LLMs) are increasingly deployed on edge devices with Neural Processing Units (NPUs), yet the decode phase remains memory-intensive, limiting performance. Processing-in-Memory (PIM) offers a promising solution, but co-executing NPU-PIM systems face challenges such as data layout mismatches, bandwidth loss, and redundant storage. To address these issues, we propose UMDAM, a unified memory-affinity data layout and DRAM address mapping scheme tailored for NPU-PIM co-execution. UMDAM employs a column-major, tile-based layout and a configurable DRAM mapping strategy to ensure compatibility with NPU computation while maximizing PIM efficiency -- without introducing extra memory overhead or bandwidth loss. Comprehensive evaluations on OPT models demonstrate that UMDAM reduces time-to-first-token (TTFT) by up to 3.0x and time-to-last-token (TTLT) by 2.18x, significantly improving end-to-end LLM inference efficiency on edge devices.
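The abstract's core idea, a column-major, tile-based data layout, can be illustrated with a minimal sketch. The tile shape, matrix shape, and function below are hypothetical assumptions for illustration, not taken from the paper: the point is only that storing tiles (and elements within tiles) column-major keeps each matrix column contiguous, which suits PIM banks streaming columns during the memory-bound decode phase.

```python
# Illustrative sketch (not the paper's actual scheme): a column-major,
# tile-based linear address for a weight-matrix element.
# TILE_* and MAT_* shapes are hypothetical and chosen for simplicity.

TILE_ROWS, TILE_COLS = 16, 16   # hypothetical tile shape
MAT_ROWS, MAT_COLS = 64, 64     # hypothetical matrix shape

def tile_colmajor_addr(r, c):
    """Linear offset of element (r, c) when tiles are ordered
    column-major and elements inside a tile are column-major too."""
    tr, tc = r // TILE_ROWS, c // TILE_COLS   # tile coordinates
    ir, ic = r % TILE_ROWS, c % TILE_COLS     # intra-tile coordinates
    tiles_per_col = MAT_ROWS // TILE_ROWS     # tiles in one tile-column
    tile_id = tc * tiles_per_col + tr         # column-major tile order
    intra = ic * TILE_ROWS + ir               # column-major inside a tile
    return tile_id * (TILE_ROWS * TILE_COLS) + intra

# Walking down one matrix column yields sequential addresses inside
# each tile, so a bank-local PIM unit sees contiguous bursts:
addrs = [tile_colmajor_addr(r, 0) for r in range(TILE_ROWS)]
print(addrs == list(range(TILE_ROWS)))  # True
```

A row-major layout would instead stride by the full matrix width between consecutive column elements, which is what forces the data-layout mismatch and bandwidth loss the abstract describes for naive NPU-PIM co-execution.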
Similar Papers
UMDAM: A Unified Data Layout and DRAM Address Mapping for Heterogeneous NPU-PIM
Distributed, Parallel, and Cluster Computing
Makes AI on phones run much faster.
New Tools, Programming Models, and System Support for Processing-in-Memory Architectures
Hardware Architecture
Makes computer chips work faster inside memory.
DL-PIM: Improving Data Locality in Processing-in-Memory Systems
Hardware Architecture
Moves computer data closer for faster work.