Edge-First Language Model Inference: Models, Metrics, and Tradeoffs
By: SiYoung Jang, Roberto Morabito
Potential Business Impact:
Enables small language models to run directly on phones and other edge devices, cutting cloud costs and latency.
The widespread adoption of Language Models (LMs) across industries is driving interest in deploying these services across the computing continuum, from the cloud to the network edge. This shift aims to reduce costs, lower latency, and improve reliability and privacy. Small Language Models (SLMs), enabled by advances in model compression, are central to this shift, offering a path to on-device inference on resource-constrained edge platforms. This work examines the interplay between edge and cloud deployments, starting from detailed benchmarking of SLM capabilities on single edge devices, and extending to distributed edge clusters. We identify scenarios where edge inference offers comparable performance with lower costs, and others where cloud fallback becomes essential due to limits in scalability or model capacity. Rather than proposing a one-size-fits-all solution, we present platform-level comparisons and design insights for building efficient, adaptive LM inference systems across heterogeneous environments.
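As a concrete illustration of the edge-first, cloud-fallback pattern the abstract describes, the sketch below routes a request to a local SLM when the device can meet a latency budget, and falls back to a cloud LLM when the edge is saturated or the task exceeds the small model's capacity. This is a minimal sketch, not the authors' system: the thresholds, the EdgeStatus fields, and the helpers run_slm_locally and call_cloud_llm are hypothetical placeholders assumed for illustration.

```python
# Minimal sketch of edge-first inference with cloud fallback.
# All names and thresholds are hypothetical, not from the paper.

from dataclasses import dataclass


@dataclass
class EdgeStatus:
    queue_depth: int       # requests currently waiting on the edge device
    est_latency_s: float   # estimated time-to-first-token on the edge

LATENCY_BUDGET_S = 2.0     # assumed service-level latency target
MAX_QUEUE_DEPTH = 8        # assumed edge saturation point
MAX_EDGE_CONTEXT = 4096    # assumed SLM context limit, in tokens


def run_slm_locally(prompt: str) -> str:
    """Placeholder for on-device SLM inference (e.g., a quantized model)."""
    raise NotImplementedError


def call_cloud_llm(prompt: str) -> str:
    """Placeholder for a remote LLM API call."""
    raise NotImplementedError


def route(prompt: str, prompt_tokens: int, status: EdgeStatus) -> str:
    # Fall back to the cloud when the edge cannot meet the latency budget,
    # is saturated, or the prompt exceeds the small model's capacity.
    if (status.est_latency_s > LATENCY_BUDGET_S
            or status.queue_depth >= MAX_QUEUE_DEPTH
            or prompt_tokens > MAX_EDGE_CONTEXT):
        return call_cloud_llm(prompt)
    try:
        return run_slm_locally(prompt)
    except (MemoryError, TimeoutError):
        # Edge inference failed at runtime; the cloud acts as a safety net.
        return call_cloud_llm(prompt)
```

The routing predicate mirrors the two failure modes the abstract identifies for edge-only deployment: scalability limits (queue depth, latency) and model-capacity limits (context length), each of which triggers cloud fallback.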
Similar Papers
Sometimes Painful but Certainly Promising: Feasibility and Trade-offs of Language Model Inference at the Edge
Machine Learning (CS)
Benchmarks the feasibility and trade-offs of running language model inference on edge devices.
Collaborative Inference and Learning between Edge SLMs and Cloud LLMs: A Survey of Algorithms, Execution, and Open Challenges
Distributed, Parallel, and Cluster Computing
Surveys algorithms, execution strategies, and open challenges for collaborative inference between edge SLMs and cloud LLMs.
CE-LSLM: Efficient Large-Small Language Model Inference and Communication via Cloud-Edge Collaboration
Networking and Internet Architecture
Coordinates large and small language models across cloud and edge to make inference and communication more efficient.