Quality-of-Service Aware LLM Routing for Edge Computing with Multiple Experts
By: Jin Yang, Qiong Wu, Zhiying Feng, and more
Potential Business Impact:
Speeds up AI responses while keeping data private.
Large Language Models (LLMs) have demonstrated remarkable capabilities, leading to a significant increase in user demand for LLM services. However, cloud-based LLM services often suffer from high latency, unstable responsiveness, and privacy concerns. Multiple LLMs are therefore deployed at the network edge to boost real-time responsiveness and protect data privacy, particularly for emerging smart mobile and IoT applications. Given the varying response quality and latency of LLM services, a critical issue is how to route user requests from mobile and IoT devices to an appropriate LLM service (i.e., an edge LLM expert) so as to ensure acceptable quality-of-service (QoS). Existing routing algorithms fail to simultaneously address the heterogeneity of LLM services, the interference among concurrent requests, and the dynamic workloads, all of which must be handled to maintain long-term stable QoS. To meet these challenges, this paper proposes a novel deep reinforcement learning (DRL)-based QoS-aware LLM routing framework for sustained high-quality LLM services. Because the global state is dynamic, we propose a dynamic state abstraction technique that compactly represents global state features with a heterogeneous graph attention network (HAN). Additionally, we introduce an action impact estimator and a tailored reward function to guide the DRL agent in maximizing QoS and preventing latency violations. Extensive experiments on both Poisson and real-world workloads demonstrate that the proposed algorithm significantly improves average QoS and computing resource efficiency compared to existing baselines.
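The paper's exact reward formulation and routing policy are not reproduced here; the Python sketch below is a minimal, hypothetical illustration of the underlying idea: a QoS reward that trades response quality against latency-budget violations, plus a greedy baseline router over edge experts. The function names, penalty weight, and latency-budget form are assumptions for illustration only; the proposed framework instead trains a DRL agent over a HAN-based global-state abstraction with an action impact estimator.

import numpy as np

def qos_reward(quality: float, latency: float, latency_budget: float,
               violation_penalty: float = 1.0) -> float:
    """Hypothetical QoS reward: response quality, minus a penalty
    proportional to how far latency exceeds the budget."""
    reward = quality
    if latency > latency_budget:
        reward -= violation_penalty * (latency - latency_budget) / latency_budget
    return reward

def route_request(expert_qualities, expert_latencies, latency_budget):
    """Greedy baseline: send the request to the edge expert with the highest
    estimated QoS reward (the paper's DRL agent replaces this heuristic)."""
    rewards = [qos_reward(q, l, latency_budget)
               for q, l in zip(expert_qualities, expert_latencies)]
    return int(np.argmax(rewards))

if __name__ == "__main__":
    # Example: three edge LLM experts with different quality/latency trade-offs.
    qualities = [0.82, 0.90, 0.75]    # estimated response quality per expert
    latencies = [120.0, 450.0, 80.0]  # estimated response latency (ms) per expert
    print(route_request(qualities, latencies, latency_budget=300.0))  # -> 0

In this toy example the highest-quality expert is skipped because it would violate the 300 ms latency budget, illustrating the quality-versus-latency trade-off that the DRL agent is trained to balance under dynamic workloads.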
Similar Papers
Dynamic Quality-Latency Aware Routing for LLM Inference in Wireless Edge-Device Networks
Information Theory
Makes smart assistants answer faster and better.
Adaptive LLM Routing under Budget Constraints
Machine Learning (CS)
Chooses best AI for your question.