Score: 1

Efficient LLM Inference over Heterogeneous Edge Networks with Speculative Decoding

Published: October 13, 2025 | arXiv ID: 2510.11331v1

By: Bingjie Zhu, Zhixiong Chen, Liqiang Zhao and more

Potential Business Impact:

Makes AI answer questions much faster and lets edge servers handle more users at once.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language model (LLM) inference at the network edge is a promising serving paradigm that leverages distributed edge resources to run inference near users and enhance privacy. Existing edge-based LLM inference systems typically adopt autoregressive decoding (AD), which generates only one token per forward pass. This iterative process, compounded by the limited computational resources of edge nodes, results in high serving latency and constrains the system's ability to support multiple users under growing demand. To address these challenges, we propose a speculative decoding (SD)-based LLM serving framework that deploys small and large models across heterogeneous edge nodes to collaboratively deliver inference services. Specifically, the small model rapidly generates draft tokens that the large model verifies in parallel, enabling multi-token generation per forward pass and thus reducing serving latency. To improve resource utilization of edge nodes, we incorporate pipeline parallelism to overlap drafting and verification across multiple inference tasks. Based on this framework, we analyze and derive a comprehensive latency model incorporating both communication and inference latency. We then formulate a joint optimization problem over speculation length, task batching, and wireless communication resource allocation to minimize total serving latency. To solve this problem, we derive closed-form solutions for wireless communication resource allocation and develop a dynamic programming algorithm for joint batching and speculation control. Experimental results demonstrate that the proposed framework achieves lower serving latency than AD-based serving systems. In addition, the proposed joint optimization method delivers up to a 44.9% latency reduction compared to benchmark schemes.
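The paper itself does not include code; the sketch below only illustrates the generic draft-then-verify loop that the abstract describes, under stated assumptions. The `draft_model` and `target_model` callables are hypothetical placeholders for the small and large models deployed on edge nodes, and the fixed speculation length `gamma` stands in for the value the paper optimizes jointly with batching and radio resources.

```python
# Minimal sketch of one speculative decoding round (assumed interfaces, not the paper's code).
# draft_model(ctx) and target_model(prefix, drafts) are hypothetical callables that
# return next-token probability distributions over the vocabulary.
import numpy as np

def speculative_decode_step(draft_model, target_model, prefix, gamma=4, rng=None):
    """Small model proposes `gamma` draft tokens; large model verifies them in one parallel pass."""
    rng = rng or np.random.default_rng()

    # 1) Small (draft) model autoregressively proposes gamma candidate tokens.
    draft_tokens, draft_probs = [], []
    ctx = list(prefix)
    for _ in range(gamma):
        q = draft_model(ctx)                          # distribution over the vocabulary
        t = int(rng.choice(len(q), p=q))
        draft_tokens.append(t)
        draft_probs.append(q)
        ctx.append(t)

    # 2) Large (target) model scores the prefix plus all drafts in a single forward pass,
    #    returning a distribution at every draft position plus one extra position.
    target_probs = target_model(prefix, draft_tokens)  # gamma + 1 distributions

    # 3) Accept each draft token with probability min(1, p(t)/q(t));
    #    on the first rejection, resample from the residual distribution and stop.
    accepted = []
    for i, t in enumerate(draft_tokens):
        p, q = target_probs[i], draft_probs[i]
        if rng.random() < min(1.0, p[t] / max(q[t], 1e-12)):
            accepted.append(t)
        else:
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            return accepted                            # rejection ends this round

    # 4) All gamma drafts accepted: emit one bonus token from the target model.
    bonus = target_probs[gamma]
    accepted.append(int(rng.choice(len(bonus), p=bonus)))
    return accepted
```

In the serving framework described above, the drafting step would run on a resource-constrained edge node and the verification pass on a better-provisioned one, with pipeline parallelism letting one task's drafting overlap another task's verification; those scheduling and resource-allocation decisions are outside this sketch.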

Country of Origin
🇰🇷 🇨🇳 🇬🇧 Republic of Korea, China, United Kingdom

Page Count
13 pages

Category
Electrical Engineering and Systems Science: Systems and Control