A Systematic Characterization of LLM Inference on GPUs

Published: December 1, 2025 | arXiv ID: 2512.01644v1

By: Haonan Wang, Xuxin Xiao, Mingyu Yan, and more

Potential Business Impact:

Explains how LLM inference behaves on GPUs and offers practical guidance for making it faster and more efficient.

Business Areas:
GPU Hardware

This work presents a systematic characterization of Large Language Model (LLM) inference on GPUs to address the currently fragmented understanding of its performance. Through comprehensive experiments, we establish a four-dimensional analytical framework: (1) Two-Phase Heterogeneity Observation; (2) Microarchitectural Root Cause Analysis; (3) System Scaling Principles; and (4) Emerging Paradigm Boundaries. Our investigation progresses systematically from observation to foresight: identifying performance phenomena, revealing hardware causes, validating system behavior, and exploring new paradigms. This study not only consolidates a reliable empirical foundation for existing research but also provides new discoveries and practical optimization guidance for LLM inference.
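The "Two-Phase Heterogeneity" the abstract refers to is the well-known split between the prefill phase (processing the whole prompt at once, typically compute-bound) and the decode phase (generating one token at a time, typically memory-bandwidth-bound). The sketch below is not taken from the paper; it is a rough roofline-style estimate, with an assumed hidden size and FP16 weights, showing why the arithmetic intensity of the two phases differs by orders of magnitude.

```python
# Minimal sketch (not from the paper): back-of-the-envelope arithmetic intensity
# for the projection GEMMs of one transformer layer, contrasting prefill vs. decode.
# Model size, sequence length, and byte counts below are illustrative assumptions.

def matmul_flops(m: int, n: int, k: int) -> int:
    """FLOPs for an (m x k) @ (k x n) matrix multiply (multiply + add)."""
    return 2 * m * n * k

def layer_arithmetic_intensity(batch: int, tokens: int, d_model: int,
                               bytes_per_param: int = 2) -> float:
    """Rough FLOPs-per-byte, counting only the Q/K/V/O weight traffic.

    tokens: query tokens processed in this step
            (prompt length during prefill, 1 per step during decode).
    """
    weight_bytes = 4 * d_model * d_model * bytes_per_param   # four d_model x d_model matrices
    flops = 4 * matmul_flops(batch * tokens, d_model, d_model)
    return flops / weight_bytes

if __name__ == "__main__":
    d_model = 4096  # assumed hidden size of a 7B-class model
    prefill = layer_arithmetic_intensity(batch=1, tokens=2048, d_model=d_model)
    decode = layer_arithmetic_intensity(batch=1, tokens=1, d_model=d_model)
    print(f"prefill intensity ~ {prefill:.0f} FLOPs/byte (above typical GPU ridge point: compute-bound)")
    print(f"decode  intensity ~ {decode:.1f} FLOPs/byte (far below ridge point: memory-bound)")
```

With these assumed numbers, prefill lands around 2048 FLOPs/byte while single-token decode lands around 1 FLOP/byte, which is the intuition behind treating the two phases as heterogeneous workloads with different hardware bottlenecks.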

Country of Origin
🇨🇳 China

Page Count
23 pages

Category
Computer Science: Hardware Architecture