SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving
By: Xiangchen Li, Dimitrios Spatharakis, Saeid Ghafouri, and more
Potential Business Impact:
Lets small computers run big AI models faster.
The growing gap between the increasing complexity of large language models (LLMs) and the limited computational budgets of edge devices poses a key challenge for efficient on-device inference, despite gradual improvements in hardware capabilities. Existing strategies, such as aggressive quantization, pruning, or remote inference, trade accuracy for efficiency or impose substantial cost burdens. This position paper introduces a new framework that leverages speculative decoding, previously viewed primarily as a technique for accelerating autoregressive LLM generation, as a promising approach specifically adapted to edge computing by orchestrating computation across heterogeneous devices. We propose SLED, a framework that allows lightweight edge devices to draft multiple candidate tokens locally using diverse draft models, while a single, shared edge server verifies the tokens using a more precise target model. To further increase the efficiency of verification, the edge server batches the diverse verification requests from devices. This approach supports device heterogeneity and reduces server-side memory footprint by sharing the same upstream target model across multiple devices. Our initial experiments with a Jetson Orin Nano, Raspberry Pi 4B/5, and an edge server equipped with 4 NVIDIA A100 GPUs indicate substantial benefits: 2.2× higher system throughput, 2.8× greater system capacity, and better cost efficiency, all without sacrificing model accuracy.
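To make the draft-and-verify split concrete, here is a minimal, dependency-free Python sketch of greedy speculative decoding. The callables `draft_model` and `target_model`, the window size `k`, and the toy vocabulary are illustrative stand-ins rather than SLED's implementation; the paper's batched, multi-device verification on a shared target model is only noted in comments.

```python
import random

# Sketch of the draft-then-verify loop behind speculative decoding,
# using deterministic toy "models" (plain callables) instead of LLMs.
# Greedy acceptance is shown for simplicity; SLED's acceptance rule,
# request batching, and scheduling are not reproduced here.

VOCAB = list(range(100))

def draft_model(prefix):
    # Hypothetical lightweight on-device model: a cheap next-token guess.
    random.seed(sum(prefix) % 7919)
    return random.choice(VOCAB)

def target_model(prefix):
    # Hypothetical server-side target model: the authoritative token.
    random.seed(sum(prefix) % 7907)
    return random.choice(VOCAB)

def speculate(prefix, k):
    """Device side: autoregressively draft k candidate tokens."""
    draft = []
    for _ in range(k):
        draft.append(draft_model(prefix + draft))
    return draft

def verify(prefix, draft):
    """Server side: check all drafted tokens against the target model.

    A real server would score every position of the drafted block in a
    single forward pass (and batch blocks from many devices); we loop
    here to stay dependency-free. The longest agreeing prefix is kept,
    then one corrected token from the target model is appended.
    """
    accepted = []
    for tok in draft:
        expected = target_model(prefix + accepted)
        if tok != expected:
            return accepted + [expected]  # reject the tail, fix one token
        accepted.append(tok)
    # Every draft token was accepted; the target adds a bonus token.
    return accepted + [target_model(prefix + accepted)]

def generate(prompt, steps=5, k=4):
    tokens = list(prompt)
    for _ in range(steps):
        draft = speculate(tokens, k)     # cheap, runs on the edge device
        tokens += verify(tokens, draft)  # one round trip to the server
    return tokens

print(generate([1, 2, 3]))
```

Each round trip yields between one and k+1 committed tokens, which is where the throughput gain comes from: the expensive target model runs once per block instead of once per token, and in SLED that single verification pass is further amortized by batching blocks from many heterogeneous devices against one shared target model.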
Similar Papers
Efficient LLM Inference over Heterogeneous Edge Networks with Speculative Decoding
Systems and Control
Makes AI answer questions much faster.
Fast and Cost-effective Speculative Edge-Cloud Decoding with Early Exits
Robotics
Makes smart devices run AI faster and cheaper.
SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs
Computation and Language
Makes AI run faster and cheaper everywhere.