LUT-LLM: Efficient Large Language Model Inference with Memory-based Computations on FPGAs
By: Zifan He, Shengyu Ye, Rui Ma, and more
Potential Business Impact:
Makes AI run faster and use less power.
The rapid progress of large language models (LLMs) has advanced numerous applications, yet efficient single-batch inference remains vital for on-device intelligence. While FPGAs offer fine-grained data control and high energy efficiency, recent GPU optimizations have narrowed their advantage, especially under arithmetic-based computation. To overcome this, we leverage FPGAs' abundant on-chip memory to shift LLM inference from arithmetic- to memory-based computation through table lookups. We present LUT-LLM, the first FPGA accelerator enabling 1B+ LLM inference via vector-quantized memory operations. Our analysis identifies activation-weight co-quantization as the most effective scheme, supported by (1) bandwidth-aware parallel centroid search, (2) efficient 2D table lookups, and (3) a spatial-temporal hybrid design minimizing data caching. Implemented on an AMD V80 FPGA for a customized Qwen 3 1.7B model, LUT-LLM achieves 1.66x lower latency than AMD MI210 and 1.72x higher energy efficiency than NVIDIA A100, and it scales to 32B models with a 2.16x energy-efficiency gain over the A100.
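To make the memory-based computation concrete, below is a minimal NumPy sketch of a LUT-based GEMV under activation-weight co-quantization: per-group codebooks for weights and activations are built offline, every centroid-centroid product is precomputed into a 2D table, and inference then reduces to a nearest-centroid search over the activations followed by table lookups and additions. All names (CW, CA, quantize_act, lut_gemv) and the toy dimensions are illustrative assumptions, not the paper's kernels, codebook sizes, or data layout.

```python
# Minimal sketch of LUT-based GEMV with activation-weight co-quantization.
# All multiplications are precomputed offline into per-group 2D tables;
# online inference uses only centroid search, table lookups, and additions.
import numpy as np

rng = np.random.default_rng(0)

D, N = 64, 128        # input (hidden) dim, output dim
G = 4                 # sub-vector (group) length
KW, KA = 16, 8        # number of weight / activation centroids per group
NG = D // G           # number of groups

# Offline: per-group weight codebooks and the code assigned to each weight sub-vector
# (random stand-ins here; a real deployment would use a vector-quantized checkpoint).
CW = rng.standard_normal((NG, KW, G)).astype(np.float32)   # weight centroids
codes_w = rng.integers(0, KW, size=(N, NG))                # weight code per (output row, group)

# Offline: per-group activation codebooks (co-quantization quantizes activations too).
CA = rng.standard_normal((NG, KA, G)).astype(np.float32)

# Offline: 2D lookup tables, table[g, a, w] = <CA[g, a], CW[g, w]>.
table = np.einsum('gac,gwc->gaw', CA, CW)

def quantize_act(x):
    """Nearest-centroid search per activation sub-vector (the step the paper parallelizes)."""
    xg = x.reshape(NG, G)
    dists = ((xg[:, None, :] - CA) ** 2).sum(-1)    # (NG, KA) squared distances
    return dists.argmin(axis=1)                     # (NG,) activation codes

def lut_gemv(codes_a):
    """Approximate y = W @ x using only 2D table lookups and additions."""
    g = np.arange(NG)
    # Gather element [n, g] = table[g, codes_a[g], codes_w[n, g]], then sum over groups.
    return table[g, codes_a, codes_w].sum(axis=1)   # (N,)

# Sanity check against an arithmetic GEMV on the reconstructed (quantized) operands.
x = rng.standard_normal(D).astype(np.float32)
codes_a = quantize_act(x)
W_hat = CW[np.arange(NG), codes_w].reshape(N, D)    # dequantized weights
x_hat = CA[np.arange(NG), codes_a].reshape(D)       # dequantized activations
print(np.allclose(lut_gemv(codes_a), W_hat @ x_hat, atol=1e-4))
```

In an accelerator setting, the codebooks and 2D tables would presumably reside in on-chip memory, so per-token work is dominated by table reads and accumulations rather than multiply-accumulates; that is the trade the abstract describes when it shifts inference from arithmetic- to memory-based computation.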
Similar Papers
ELUTQ: Efficient LUT-Aware Quantization for Deploying Large Language Models on Edge Devices
Machine Learning (CS)
Makes smart AI run on phones, faster and smaller.
SAIL: SRAM-Accelerated LLM Inference System with Lookup-Table-based GEMV
Hardware Architecture
Makes AI smarter on regular computers.
F-BFQ: Flexible Block Floating-Point Quantization Accelerator for LLMs
Hardware Architecture
Makes smart computer programs run faster on phones.