Orchestration for Domain-specific Edge-Cloud Language Models
By: Prasoon Patidar, Alex Crown, Kevin Hsieh, and more
Potential Business Impact:
Makes smart assistants faster and cheaper.
The remarkable performance of Large Language Models (LLMs) has inspired many applications, which often necessitate edge-cloud collaboration due to connectivity, privacy, and cost considerations. Traditional methods primarily focus on selecting the best LLM for optimal performance, while neglecting the critical interplay between the components of the LLM serving pipeline (context retrieval, query preprocessing, etc.) and the changing latency and cost constraints. We introduce ECO-LLM (Edge-Cloud Orchestrator for LLMs), a novel system that reframes this problem as a joint optimization challenge and solves it by systematically exploring component configurations and dynamically selecting optimal strategies at the query level. ECO-LLM consists of two components: (1) the ECO-LLM Emulator, which efficiently explores the vast configuration space using query clustering and Pareto-optimal path selection, gathering domain-specific performance metrics without exhaustive evaluation; and (2) the ECO-LLM Runtime, which leverages these metrics to dynamically select optimal resolution strategies for user queries while meeting user-defined Service Level Objectives (SLOs). We evaluate ECO-LLM on smart home and smart car assistant scenarios. With an exhaustive exploration of all possible configurations for seen queries, ECO-LLM outperforms cloud-based models such as GPT-4o in accuracy (90% vs. 74% on average) while reducing costs by 90% and latency by 55%, demonstrating the value of its query-level joint optimization. In practical deployment on previously unseen queries, ECO-LLM selects configurations that reduce costs by 62% or improve response times by 62% on average compared to state-of-the-art model routing approaches, while maintaining higher accuracy and consistently adhering to specified latency and cost constraints.
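To make the runtime's selection step concrete, here is a minimal Python sketch of query-level strategy selection under SLOs: given the Pareto-optimal configurations an emulator has profiled for a query's cluster, pick the most accurate one that satisfies both the latency and the cost SLO. All names, data structures, and numbers below are illustrative assumptions, not ECO-LLM's actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical types sketching query-level strategy selection under SLOs,
# as described in the abstract. Names and fields are assumptions, not
# ECO-LLM's real API.

@dataclass
class Config:
    name: str          # e.g. "edge-slm+retrieval" or "cloud-llm+retrieval"
    accuracy: float    # estimated domain accuracy from emulator profiling
    latency_s: float   # estimated end-to-end latency in seconds
    cost_usd: float    # estimated per-query cost in dollars

def select_config(candidates: list[Config],
                  max_latency_s: float,
                  max_cost_usd: float) -> Config | None:
    """Return the most accurate candidate meeting both SLOs.

    `candidates` would be the Pareto-optimal configurations recorded
    for the cluster the incoming query falls into.
    """
    feasible = [c for c in candidates
                if c.latency_s <= max_latency_s and c.cost_usd <= max_cost_usd]
    if not feasible:
        return None  # no configuration meets the SLOs; caller must relax them
    return max(feasible, key=lambda c: c.accuracy)

# Illustrative numbers only: a cheap edge pipeline, an edge pipeline with
# retrieval, and a cloud pipeline that is more accurate but slower/pricier.
candidates = [
    Config("edge-slm",            accuracy=0.82, latency_s=0.4, cost_usd=0.0001),
    Config("edge-slm+retrieval",  accuracy=0.90, latency_s=0.9, cost_usd=0.0004),
    Config("cloud-llm+retrieval", accuracy=0.93, latency_s=1.8, cost_usd=0.0100),
]
best = select_config(candidates, max_latency_s=1.0, max_cost_usd=0.001)
print(best.name if best else "no feasible configuration")
# -> "edge-slm+retrieval"
```

In this toy run the mid-tier edge pipeline wins: the cloud configuration is more accurate but violates both SLOs, which mirrors the paper's premise that the best resolution strategy shifts from query to query as constraints change.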
Similar Papers
Collaborative Inference and Learning between Edge SLMs and Cloud LLMs: A Survey of Algorithms, Execution, and Open Challenges
Distributed, Parallel, and Cluster Computing
Smart computers work together for faster, private AI.
A Structure-Agnostic Co-Tuning Framework for LLMs and SLMs in Cloud-Edge Systems
Distributed, Parallel, and Cluster Computing
Lets phones and computers learn together better.
Leveraging Large Language Models to Develop Heuristics for Emerging Optimization Problems
Artificial Intelligence
AI learns to solve tricky problems faster.