Argus: Token-Aware Distributed LLM Inference Optimization
By: Panlong Wu, Yifei Zhong, Danyang Chen, and more
Potential Business Impact:
Makes AI answer questions faster on different devices.
Large Language Models (LLMs) are rapidly being integrated into real-world applications, yet their autoregressive architectures introduce significant variability in inference time, especially when deployed across heterogeneous edge-cloud systems. Existing solutions largely neglect the dynamic, stochastic, and heterogeneous nature of such environments, often ignoring the impact of variable output token lengths and device diversity. In this work, we present Argus, the first token-aware distributed edge-cloud LLM inference framework that performs efficient task offloading. Argus features a Length-Aware Semantics (LAS) module, which predicts the output token length of each incoming prompt using a fine-tuned language model with token-length-sensitive feature modulation, enabling precise length estimation. Building on this, our Lyapunov-guided Offloading Optimization (LOO) module formulates a long-term Quality-of-Experience (QoE) optimization problem that explicitly accounts for both LLM prefilling and decoding costs. We introduce a novel Iterative Offloading Algorithm with Damping and Congestion Control (IODCC) to efficiently solve the resulting integer nonlinear programming problem under time-varying constraints. Extensive theoretical and empirical evaluations demonstrate that Argus achieves robust performance and superior efficiency in highly dynamic, heterogeneous settings.
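To give a flavor of the idea described in the abstract, the sketch below shows a minimal token-aware, Lyapunov-style offloading decision: a predicted output token length scales each device's decoding cost, and a drift-plus-penalty score balances queue backlog against estimated latency. This is not the paper's LAS or IODCC implementation; the Device fields, the offload function, the trade-off parameter V, and the placeholder per-slot service rate are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): token-aware
# Lyapunov drift-plus-penalty offloading across heterogeneous devices.

from dataclasses import dataclass


@dataclass
class Device:
    name: str
    prefill_rate: float   # prompt tokens processed per second (assumed)
    decode_rate: float    # output tokens generated per second (assumed)
    queue: float = 0.0    # virtual queue backlog, in seconds of pending work


def offload(prompt_tokens: int, predicted_output_tokens: int,
            devices: list[Device], V: float = 1.0) -> Device:
    """Pick the device minimizing a drift-plus-penalty score:
    queue backlog * new workload + V * estimated latency."""

    def latency(d: Device) -> float:
        # Prefilling cost grows with prompt length; decoding cost grows
        # with the *predicted* output length (the token-aware part).
        return (prompt_tokens / d.prefill_rate
                + predicted_output_tokens / d.decode_rate)

    def score(d: Device) -> float:
        # Simplification: the queued workload and the QoE penalty are both
        # taken to be the estimated latency of this request.
        return d.queue * latency(d) + V * latency(d)

    best = min(devices, key=score)
    # Update the chosen device's virtual queue: new work arrives, and an
    # assumed one second of work is served per scheduling slot.
    served_per_slot = 1.0
    best.queue = max(best.queue + latency(best) - served_per_slot, 0.0)
    return best


if __name__ == "__main__":
    edge = Device("edge-gpu", prefill_rate=2000.0, decode_rate=40.0)
    cloud = Device("cloud-gpu", prefill_rate=20000.0, decode_rate=400.0)
    choice = offload(prompt_tokens=512, predicted_output_tokens=256,
                     devices=[edge, cloud], V=0.5)
    print(f"Offload to: {choice.name}")
```

Larger V weights immediate latency more heavily, while the queue term discourages repeatedly overloading one device; the paper's LOO/IODCC formulation handles this trade-off under time-varying constraints rather than with this toy per-request rule.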
Similar Papers
HybridFlow: Adaptive Task Scheduling for Fast and Token-Efficient LLM Inference in Edge-Cloud Collaboration
Distributed, Parallel, and Cluster Computing
Splits smart tasks between phone and cloud.
Distributed On-Device LLM Inference With Over-the-Air Computation
Distributed, Parallel, and Cluster Computing
Lets phones run smart AI without internet.
Efficient LLM Inference over Heterogeneous Edge Networks with Speculative Decoding
Systems and Control
Makes AI answer questions much faster.