A First Look at Bugs in LLM Inference Engines

Published: June 11, 2025 | arXiv ID: 2506.09713v1

By: Mugeng Liu, Siqi Zhong, Weichen Bi, and others

Potential Business Impact:

Characterizes real-world bugs in LLM inference engines, helping engine vendors and app developers detect and fix reliability issues.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language model-specific inference engines (LLM inference engines for short) have become a fundamental component of modern AI infrastructure, enabling the deployment of LLM-powered applications (LLM apps) across cloud and local devices. Despite their critical role, LLM inference engines are prone to bugs due to the immense resource demands of LLMs and the complexities of cross-platform compatibility. However, a systematic understanding of these bugs remains lacking. To bridge this gap, we present the first empirical study on bugs in LLM inference engines. We mine the official repositories of 5 widely adopted LLM inference engines, constructing a comprehensive dataset of 929 real-world bugs. Through a rigorous open coding process, we analyze these bugs to uncover their symptoms, root causes, and commonalities. Our findings reveal six major bug symptoms and a taxonomy of 28 root causes, shedding light on the key challenges in bug detection and localization within LLM inference engines. Based on these insights, we propose a series of actionable implications for researchers, inference engine vendors, and LLM app developers.
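The repository-mining step described in the abstract can be sketched as a simple filter over issue-tracker records. Note this is a hypothetical illustration, not the paper's actual selection criteria: the field names (`state`, `labels`, `fix_commit`) and the "closed, bug-labeled, with a linked fix" heuristic are assumptions.

```python
# Hypothetical sketch of mining confirmed bugs from a repository's
# issue tracker. Each issue record is a dict, e.g. as parsed from an
# issue-tracker API response. Field names here are illustrative.

def is_candidate_bug(issue: dict) -> bool:
    """Return True if an issue looks like a confirmed, fixed bug:
    closed, labeled as a bug, and linked to a fixing commit."""
    labels = {label.lower() for label in issue.get("labels", [])}
    return (
        issue.get("state") == "closed"
        and "bug" in labels                      # assumed label convention
        and issue.get("fix_commit") is not None  # require a linked fix
    )

def mine_bugs(issues: list[dict]) -> list[dict]:
    """Filter a repository's issues down to a bug dataset."""
    return [issue for issue in issues if is_candidate_bug(issue)]
```

In practice, a study like this would also deduplicate reports and manually validate each candidate before it enters the dataset; the filter above only captures the automated first pass.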

Country of Origin
πŸ‡ΈπŸ‡¬ πŸ‡¨πŸ‡³ Singapore, China

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Software Engineering