Topology-Aware Virtualization over Inter-Core Connected Neural Processing Units
By: Dahu Feng, Erhu Feng, Dong Du, and more
Potential Business Impact:
Makes AI chips work better for many tasks.
With the rapid development of artificial intelligence (AI) applications, an emerging class of AI accelerators, termed Inter-core Connected Neural Processing Units (NPUs), has been adopted in both cloud and edge computing environments, such as the Graphcore IPU and Tenstorrent. Despite their innovative design, these NPUs often demand substantial hardware resources, and the imbalance of hardware requirements across tasks leads to suboptimal resource utilization. To address this issue, prior research has explored virtualization techniques for monolithic NPUs, but has overlooked inter-core connected NPUs and their hardware topology. This paper introduces vNPU, the first comprehensive virtualization design for inter-core connected NPUs, integrating three novel techniques: (1) NPU route virtualization, which redirects instruction and data flow from virtual NPU cores to physical ones, creating a virtual topology; (2) NPU memory virtualization, designed to minimize translation stalls for SRAM-centric and NoC-equipped NPU cores, thereby maximizing memory bandwidth; and (3) best-effort topology mapping, which determines the optimal mapping from all candidate virtual topologies, balancing resource utilization with end-to-end performance. We have developed a prototype of vNPU on both an FPGA platform (Chipyard+FireSim) and a simulator (DCRA). Evaluation results indicate that, compared to other virtualization approaches such as unified virtual memory and MIG, vNPU achieves up to a 2x performance improvement across various ML models, at only a 2% hardware cost.
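To make the route-virtualization and topology-mapping ideas concrete, here is a minimal sketch, not the paper's actual design or API: it assumes a hypothetical 4x4 physical mesh of NPU cores, brute-forces a virtual-to-physical placement that minimizes total NoC hops over the virtual topology's edges, and rewrites routes expressed in virtual core IDs into physical ones. All names (map_topology, rewrite_route, MESH_W) are illustrative assumptions.

```python
# Hypothetical sketch of topology-aware mapping and route virtualization.
# Not the vNPU implementation; all names and the mesh size are assumptions.
from itertools import permutations

MESH_W = 4  # assumed 4x4 physical mesh of NPU cores


def coord(pid):
    """Physical core ID -> (x, y) position on the assumed mesh."""
    return pid % MESH_W, pid // MESH_W


def hops(a, b):
    """NoC hop count between two physical cores (Manhattan distance)."""
    ax, ay = coord(a)
    bx, by = coord(b)
    return abs(ax - bx) + abs(ay - by)


def map_topology(virtual_edges, free_cores):
    """Best-effort mapping: choose the virtual->physical assignment that
    minimizes total hops over the virtual topology's edges (brute force
    for clarity; a real system would need a scalable heuristic)."""
    n_virtual = 1 + max(max(e) for e in virtual_edges)
    best, best_cost = None, float("inf")
    for phys in permutations(free_cores, n_virtual):
        cost = sum(hops(phys[u], phys[v]) for u, v in virtual_edges)
        if cost < best_cost:
            best, best_cost = dict(enumerate(phys)), cost
    return best, best_cost


def rewrite_route(route_table, virtual_route):
    """Route virtualization: redirect a flow expressed over virtual core IDs
    onto the physical cores chosen by the mapping."""
    return [route_table[vc] for vc in virtual_route]


if __name__ == "__main__":
    # A 3-core virtual pipeline 0 -> 1 -> 2 placed on a subset of free cores.
    virtual_edges = [(0, 1), (1, 2)]
    free_cores = [0, 1, 5, 6, 10]
    table, cost = map_topology(virtual_edges, free_cores)
    print("virtual->physical:", table, "total hops:", cost)
    print("physical route for virtual path [0,1,2]:", rewrite_route(table, [0, 1, 2]))
```

The brute-force search stands in for the paper's best-effort selection among candidate virtual topologies; the point is only that placement quality is judged by communication distance on the physical interconnect, and that guest-visible core IDs never change even when the physical placement does.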
Similar Papers
eIQ Neutron: Redefining Edge-AI Inference with Integrated NPU and Compiler Innovations
Hardware Architecture
Makes AI on phones run much faster.
From Principles to Practice: A Systematic Study of LLM Serving on Multi-core NPUs
Hardware Architecture
Makes AI understand faster on special chips.
AutoNeural: Co-Designing Vision-Language Models for NPU Inference
Computation and Language
Makes AI see and talk faster on phones.