FPGA Co-Design for Efficient N:M Sparse and Quantized Model Inference
By: Fen-Yu Hsieh, Yun-Chang Teng, Ding-Yong Hong, et al.
Large language models (LLMs) have demonstrated remarkable performance across a wide range of language processing tasks. However, this success comes at the cost of substantial computation and memory requirements, which significantly impede their deployment in resource-constrained environments. To address this challenge, this work introduces an automation framework that leverages weight pruning and low-bit quantization, and presents a hardware-software co-design method that generates accelerators on the Field-Programmable Gate Array (FPGA) platform. In particular, we implement a unified pipeline that applies N:M structured pruning and 4-bit integer quantization to reduce the memory footprint, followed by optimized dequantization and matrix multiplication to accelerate LLM inference on several hardware platforms, including CPUs, NVIDIA GPUs with Dense and 2:4 Sparse Tensor Cores, and a custom systolic-array-based FPGA accelerator. Utilizing 2:4 sparsity combined with quantization on $4096 \times 4096$ matrices, our approach achieves a reduction of up to $4\times$ in weight storage and a $1.71\times$ speedup in matrix multiplication, yielding a $1.29\times$ end-to-end latency reduction compared to dense GPU baselines. Scaling analysis on the LLaMA-7B model further shows that structured sparsity improves per-token throughput by $1.36\times$. These results demonstrate the synergy of fine-grained N:M sparsity and quantization for enabling efficient and deployable LLM inference, while the proposed FPGA accelerator offers a flexible architectural path for supporting a broader class of sparsity patterns beyond the fixed 2:4 hardware constraints.
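The following Python sketch illustrates the two compression steps the abstract describes, 2:4 structured pruning followed by symmetric 4-bit integer quantization, with dequantization applied before the matrix multiply. It is a minimal illustration under assumed conventions (per-row symmetric scaling, NumPy reference arithmetic), not the paper's implementation; the function names are hypothetical.

```python
# Illustrative sketch of 2:4 pruning + int4 quantization + dequantize-and-matmul.
# Not the paper's code: per-row symmetric scaling and function names are assumptions.
import numpy as np

def prune_2_4(w: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude weights in every contiguous group of four along each row."""
    rows, cols = w.shape
    groups = w.reshape(rows, cols // 4, 4)
    # Indices of the two smallest |w| entries in each group of 4
    drop = np.argsort(np.abs(groups), axis=-1)[..., :2]
    pruned = groups.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=-1)
    return pruned.reshape(rows, cols)

def quantize_int4(w: np.ndarray):
    """Symmetric per-row 4-bit quantization: codes in [-8, 7] (stored as int8) plus float scales."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequant_matmul(q: np.ndarray, scale: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Dequantize the weight codes and multiply with activations x."""
    return (q.astype(np.float32) * scale) @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4096, 4096)).astype(np.float32)
    x = rng.standard_normal((4096, 8)).astype(np.float32)
    q, s = quantize_int4(prune_2_4(w))
    y = dequant_matmul(q, s, x)
    print(y.shape)  # (4096, 8)
```

In this scheme the pruned weight matrix stores only the 4-bit codes, their per-row scales, and the 2:4 position metadata, which is where the up-to-4x storage reduction reported for $4096 \times 4096$ matrices comes from; the hardware accelerators then exploit the fixed 2:4 pattern to skip the zeroed multiplications.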