Taming the Titans: A Survey of Efficient LLM Inference Serving
By: Ranran Zhen, Juntao Li, Yixin Ji, and more
Potential Business Impact:
Makes large AI models answer questions faster and lets them serve more users at once.
Large Language Models (LLMs) for Generative AI have achieved remarkable progress, evolving into sophisticated and versatile tools widely adopted across various domains and applications. However, the substantial memory overhead caused by their vast number of parameters, combined with the high computational demands of the attention mechanism, poses significant challenges in achieving low latency and high throughput for LLM inference services. Recent advancements, driven by groundbreaking research, have significantly accelerated progress in this field. This paper provides a comprehensive survey of these methods, covering fundamental instance-level approaches, in-depth cluster-level strategies, emerging scenario directions, and other miscellaneous but important areas. At the instance level, we review model placement, request scheduling, decoding length prediction, storage management, and the disaggregation paradigm. At the cluster level, we explore GPU cluster deployment, multi-instance load balancing, and cloud service solutions. For emerging scenarios, we organize the discussion around specific tasks, modules, and auxiliary methods. To ensure a holistic overview, we also highlight several niche yet critical areas. Finally, we outline potential research directions to further advance the field of LLM inference serving.
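For intuition on the memory overhead the abstract points to, the short sketch below estimates the attention key/value (KV) cache footprint of a single serving instance. This is a back-of-the-envelope illustration only; the model shape and numbers are assumptions chosen for the example, not figures taken from the survey.

```python
# Illustrative sketch (not from the paper): why the KV cache dominates LLM
# serving memory. All model dimensions below are assumed, not reported values.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int, bytes_per_elem: int = 2) -> int:
    """Memory for the attention key/value cache: 2 tensors (K and V) per layer."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# Hypothetical 7B-class model in fp16, 4k-token context, 32 concurrent requests.
per_request = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                             seq_len=4096, batch_size=1)
print(f"KV cache per request: {per_request / 2**30:.2f} GiB")        # ~2 GiB
print(f"KV cache for 32 requests: {32 * per_request / 2**30:.1f} GiB")  # ~64 GiB
```

Even a mid-sized model can need tens of GiB of cache at moderate concurrency, which is why the survey treats storage management, decoding length prediction, and disaggregation as core instance-level topics.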
Similar Papers
High-Throughput LLM Inference on Heterogeneous Clusters
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster by spreading the work across different kinds of computers.
Towards Efficient Multi-LLM Inference: Characterization and Analysis of LLM Routing and Hierarchical Techniques
Machine Learning (CS)
Sends each question to the right-sized AI model so computers do less work and use less power.
A Survey on Inference Engines for Large Language Models: Perspectives on Optimization and Efficiency
Computation and Language
Makes smart computer programs run faster and cheaper.