Hybrid Systolic Array Accelerator with Optimized Dataflow for Edge Large Language Model Inference
By: Chun-Ting Chen, HanGyeol Mun, Jian Meng, and more
Potential Business Impact:
Makes large AI language models run faster and more efficiently on phones and other small devices.
Edge inference for large language models (LLMs) offers secure, low-latency, and cost-effective deployment. We emphasize that an edge accelerator should achieve high area efficiency and minimize external memory access (EMA) during the memory-bound decode stage, while maintaining high energy efficiency during the compute-intensive prefill stage. This paper proposes an edge LLM inference accelerator featuring a hybrid systolic array (HSA) architecture that optimizes inference efficiency in both stages. To further reduce EMA, we adopt MXINT4 weight quantization and propose an optimized dataflow tailored for the HSA, ensuring negligible dequantization overhead and achieving 100% hardware utilization with minimal accuracy loss under edge DRAM bandwidth constraints. For non-linear operations, we incorporate optimized root mean square normalization (RMSNorm) and rotary position embedding (RoPE) units, reducing their latency, area, and memory-access overhead while enabling end-to-end inference on our accelerator. Our solution achieves 247/117 tokens/s/mm² while running a 1.3B-parameter LLM in long-input/long-output scenarios, providing a >2.45x/13.5x improvement over existing approaches, while maintaining superior energy efficiency in token generation.
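The abstract does not give hardware details, but the numerical idea behind the EMA reduction, MXINT4-style weight quantization in which a small block of weights shares one scale and is dequantized cheaply as it streams toward the compute array, can be sketched in plain Python. The block size of 32, the power-of-two shared scale, and the function names below are illustrative assumptions, not the paper's actual dataflow or HSA mapping.

```python
import numpy as np

BLOCK = 32  # assumed MX-style block size: one shared scale per 32 weights


def quantize_mxint4(w: np.ndarray):
    """Quantize a 1-D weight vector to 4-bit signed integers with a
    power-of-two scale shared per block (illustrative sketch only)."""
    pad = (-len(w)) % BLOCK
    blocks = np.pad(w, (0, pad)).reshape(-1, BLOCK)
    # Choose the shared power-of-two scale so the largest magnitude in
    # each block maps into the signed 4-bit range [-8, 7].
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    max_abs = np.where(max_abs == 0, 1.0, max_abs)
    scale = 2.0 ** np.ceil(np.log2(max_abs / 7.0))
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale


def dequantize_mxint4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate weights; with power-of-two scales this is a
    cheap shift when done in hardware next to the systolic array."""
    return (q.astype(np.float32) * scale).reshape(-1)


# Usage: the reconstruction error should stay small for typical weight ranges.
w = np.random.randn(4096).astype(np.float32) * 0.02
q, s = quantize_mxint4(w)
w_hat = dequantize_mxint4(q, s)[: len(w)]
print("mean abs error:", np.abs(w - w_hat).mean())
```

The RMSNorm and RoPE units mentioned in the abstract implement standard operations; reference definitions are sketched below. The split-half RoPE layout, the base of 10000, and the eps value are assumptions, since conventions differ between models.

```python
def rmsnorm(x: np.ndarray, gain: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Reference RMSNorm: scale the vector by the inverse of its root mean square."""
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * gain


def rope(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Reference rotary position embedding for one token's query/key vector:
    dimension pairs are rotated by position-dependent angles."""
    half = x.shape[-1] // 2
    theta = pos * base ** (-np.arange(half) / half)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * np.cos(theta) - x2 * np.sin(theta),
                           x1 * np.sin(theta) + x2 * np.cos(theta)], axis=-1)
```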
Similar Papers
EdgeProfiler: A Fast Profiling Framework for Lightweight LLMs on Edge Using Analytical Model
Distributed, Parallel, and Cluster Computing
Makes smart computer programs run on small devices.
HALO: Memory-Centric Heterogeneous Accelerator with 2.5D Integration for Low-Batch LLM Inference
Hardware Architecture
Makes AI chatbots answer questions much faster.
Understanding the Performance and Power of LLM Inferencing on Edge Accelerators
Distributed, Parallel, and Cluster Computing
Runs smart AI on small computers, not just big ones.