DSD: A Distributed Speculative Decoding Solution for Edge-Cloud Agile Large Model Serving

Published: November 26, 2025 | arXiv ID: 2511.21669v1

By: Fengze Yu, Leshu Li, Brad McDanel, and more

Potential Business Impact:

Speeds up AI text generation across many devices by splitting the work between edge hardware and cloud servers.

Business Areas:
Cloud Computing, Internet Services, Software

Large language model (LLM) inference often suffers from high decoding latency and limited scalability across heterogeneous edge-cloud environments. Existing speculative decoding (SD) techniques accelerate token generation but remain confined to single-node execution. We propose DSD, a distributed speculative decoding framework that extends SD to multi-device deployments through coordinated draft-target execution. Given the lack of prior work on simulating this paradigm, we first introduce DSD-Sim, a discrete-event simulator that captures network, batching, and scheduling dynamics. Building on insights from DSD-Sim, we further design an Adaptive Window Control (AWC) policy that dynamically adjusts speculation window size to optimize throughput. Experiments across diverse workloads show that DSD achieves up to 1.1x speedup and 9.7% higher throughput over existing SD baselines, enabling agile and scalable LLM serving across edge and cloud.
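The core loop the abstract describes, an edge-side draft model proposing a window of candidate tokens that a cloud-side target model verifies, with the window size adapted to the observed acceptance rate, can be sketched as below. This is a minimal illustrative sketch: the stub models, the fixed-probability acceptance test, and the threshold-based window update are all assumptions, not the paper's actual AWC policy or network protocol.

```python
# Minimal sketch of distributed speculative decoding with an adaptive
# speculation window. All function names and the window-update rule are
# hypothetical stand-ins for the draft-target coordination and AWC
# policy described in the abstract.
import random

def draft_propose(prefix, k):
    """Edge-side draft model: cheaply propose k candidate tokens."""
    return [random.randint(0, 99) for _ in range(k)]

def target_verify(prefix, candidates):
    """Cloud-side target model: accept a prefix of the candidates.

    A real system compares draft and target token distributions; here
    each token is accepted with a fixed probability to mimic that.
    """
    accepted = []
    for tok in candidates:
        if random.random() < 0.7:  # stand-in for the accept/reject test
            accepted.append(tok)
        else:
            break
    # On rejection (or exhaustion), the target emits one corrected token.
    accepted.append(random.randint(0, 99))
    return accepted

def adaptive_window(k, accept_rate, k_min=1, k_max=16):
    """Toy stand-in for AWC: grow the window when most drafted tokens
    are accepted, shrink it when verification wastes work."""
    if accept_rate > 0.8:
        return min(k + 1, k_max)
    if accept_rate < 0.4:
        return max(k - 1, k_min)
    return k

def generate(num_tokens=64):
    prefix, k = [], 4
    while len(prefix) < num_tokens:
        candidates = target_verify(prefix, draft_propose(prefix, k))
        accept_rate = (len(candidates) - 1) / k  # exclude corrected token
        k = adaptive_window(k, accept_rate)
        prefix.extend(candidates)
    return prefix

print(len(generate()), "tokens decoded")
```

The intuition behind adapting the window: a larger window amortizes the round trip between edge and cloud over more tokens when the draft model is accurate, while a smaller window avoids wasting draft compute and bandwidth when most speculated tokens get rejected.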

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)