Efficient Routing of Inference Requests across LLM Instances in Cloud-Edge Computing
By: Shibo Yu, Mohammad Goudarzi, Adel Nadjaran Toosi
Potential Business Impact:
Makes AI answer questions faster and cheaper.
The rising demand for Large Language Model (LLM) inference services has intensified pressure on computational resources, resulting in latency and cost challenges. This paper introduces a novel routing algorithm based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to distribute inference requests across heterogeneous LLM instances in a cloud-edge computing environment. Formulated as a multi-objective optimization problem, the algorithm balances response quality, response time, and inference cost, adapting to request heterogeneity (e.g., varying complexity and prompt lengths) and node diversity (e.g., edge vs. cloud resources). This adaptive routing optimizes performance under dynamic workloads. We benchmark the approach on a testbed using datasets including the Stanford Question Answering Dataset (SQuAD), Mostly Basic Python Problems (MBPP), HellaSwag, and Grade School Math 8K (GSM8K). Experimental results show that, compared to the baselines, our solution achieves improvements of up to 95.2% in response time and 34.9% in cost. These findings validate the algorithm's effectiveness for scalable LLM deployments.
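To illustrate the multi-objective core of such a router, the sketch below shows Pareto (non-dominated) selection over candidate routing plans with three minimized objectives: total response time, total cost, and negated aggregate quality. It is a minimal, simplified illustration, not the paper's implementation: the instance profiles, cost and latency numbers, and request parameters are hypothetical, and a full NSGA-II would additionally evolve the population with crossover, mutation, and crowding-distance selection over many generations.

```python
import random

# Hypothetical LLM instance profiles (names and numbers are illustrative only):
# per-token latency (ms), cost per 1k tokens, and a relative quality score.
INSTANCES = [
    {"name": "edge-small",   "latency": 8.0,  "cost": 0.1, "quality": 0.70},
    {"name": "cloud-medium", "latency": 15.0, "cost": 0.5, "quality": 0.85},
    {"name": "cloud-large",  "latency": 25.0, "cost": 2.0, "quality": 0.95},
]

# Heterogeneous requests: varying prompt lengths (tokens) and complexity weights.
REQUESTS = [{"tokens": random.randint(50, 800), "complexity": random.random()}
            for _ in range(20)]


def evaluate(plan):
    """Return (total response time, total cost, -aggregate quality) for one
    routing plan; all three objectives are minimized."""
    time_total = cost_total = quality_total = 0.0
    for req, idx in zip(REQUESTS, plan):
        inst = INSTANCES[idx]
        time_total += req["tokens"] * inst["latency"]
        cost_total += req["tokens"] / 1000.0 * inst["cost"]
        quality_total += inst["quality"] * (0.5 + req["complexity"])
    return (time_total, cost_total, -quality_total)


def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(population):
    """Non-dominated sorting step: keep plans no other plan dominates."""
    scored = [(plan, evaluate(plan)) for plan in population]
    return [(plan, score) for plan, score in scored
            if not any(dominates(other, score) for _, other in scored)]


# Random initial population of routing plans (one instance index per request).
population = [[random.randrange(len(INSTANCES)) for _ in REQUESTS]
              for _ in range(50)]

for plan, score in pareto_front(population)[:5]:
    print("time=%.0f  cost=%.2f  quality=%.2f" % (score[0], score[1], -score[2]))
```

In an NSGA-II-style router, this non-dominated front (refined with crowding-distance ranking) would seed the next generation, and a final plan on the front would be chosen according to the deployment's preference among quality, latency, and cost.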
Similar Papers
Quality-of-Service Aware LLM Routing for Edge Computing with Multiple Experts
Networking and Internet Architecture
Speeds up AI responses while keeping data private.
Dynamic Quality-Latency Aware Routing for LLM Inference in Wireless Edge-Device Networks
Information Theory
Makes smart assistants answer faster and better.
Towards Efficient Multi-LLM Inference: Characterization and Analysis of LLM Routing and Hierarchical Techniques
Machine Learning (CS)
Lets smart computers use less power.