Collaborative Inference and Learning between Edge SLMs and Cloud LLMs: A Survey of Algorithms, Execution, and Open Challenges
By: Senyao Li, Haozhao Wang, Wenchao Xu, and more
Potential Business Impact:
Edge devices and cloud servers share language-model workloads, enabling lower latency, stronger privacy, reduced cost, and on-device personalization.
As large language models (LLMs) evolve, deploying them solely in the cloud or compressing them for edge devices has become inadequate due to concerns about latency, privacy, cost, and personalization. This survey explores a collaborative paradigm in which cloud-based LLMs and edge-deployed small language models (SLMs) cooperate across both inference and training. We present a unified taxonomy of edge-cloud collaboration strategies. For inference, we categorize approaches into task assignment, task division, and mixture-based collaboration at both task and token granularity, encompassing adaptive scheduling, resource-aware offloading, speculative decoding, and modular routing. For training, we review distributed adaptation techniques, including parameter alignment, pruning, bidirectional distillation, and small-model-guided optimization. We further summarize datasets, benchmarks, and deployment cases, and highlight privacy-preserving methods and vertical applications. This survey provides the first systematic foundation for LLM-SLM collaboration, bridging system and algorithm co-design to enable efficient, scalable, and trustworthy edge-cloud intelligence.
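To make the inference side of the taxonomy concrete, below is a minimal sketch of confidence-gated task assignment, one of the adaptive scheduling/routing strategies the survey categorizes: the edge SLM answers locally when confident and offloads to the cloud LLM otherwise. All names here (EdgeSLM, CloudLLM, route, CONFIDENCE_THRESHOLD) and the mean-log-probability confidence proxy are illustrative assumptions, not methods taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Generation:
    text: str
    avg_logprob: float  # mean token log-probability, used here as a confidence proxy

class EdgeSLM:
    """Stand-in for a small on-device language model."""
    def generate(self, prompt: str) -> Generation:
        # A real deployment would run a quantized local model; this is a stub.
        return Generation(text=f"[SLM draft for: {prompt}]", avg_logprob=-1.2)

class CloudLLM:
    """Stand-in for a large remote model behind an API."""
    def generate(self, prompt: str) -> Generation:
        return Generation(text=f"[LLM answer for: {prompt}]", avg_logprob=-0.3)

CONFIDENCE_THRESHOLD = -0.8  # escalate to the cloud below this mean log-prob (assumed value)

def route(prompt: str, slm: EdgeSLM, llm: CloudLLM) -> str:
    """Answer locally when the SLM is confident; otherwise offload to the cloud."""
    draft = slm.generate(prompt)
    if draft.avg_logprob >= CONFIDENCE_THRESHOLD:
        return draft.text             # cheap, private, low-latency path
    return llm.generate(prompt).text  # higher-quality but costlier cloud path

if __name__ == "__main__":
    print(route("Summarize my meeting notes.", EdgeSLM(), CloudLLM()))
```

The same skeleton extends to the other inference categories: task division would split a prompt between the two models, and mixture-based collaboration would combine their outputs at token granularity, as in speculative decoding.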
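For the training side, a minimal sketch of the bidirectional distillation idea follows: the cloud LLM distills softened output distributions into the edge SLM, while the SLM's distribution on device-local data can in turn guide the LLM's adaptation. The toy logits, the temperature value, and the softmax/kl_div helpers are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-9):
    """KL(p || q), averaged over the batch."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

# Toy per-token logits: a batch of 4 positions over an 8-word vocabulary.
rng = np.random.default_rng(0)
llm_logits = rng.normal(size=(4, 8))  # teacher (cloud LLM)
slm_logits = rng.normal(size=(4, 8))  # student (edge SLM)

T = 2.0  # softening temperature, a common distillation choice
p_llm = softmax(llm_logits, T)
p_slm = softmax(slm_logits, T)

# Cloud -> edge: the SLM is trained to match the LLM's softened distribution.
loss_edge = kl_div(p_llm, p_slm)
# Edge -> cloud: on private local data, the LLM can be nudged toward the SLM's
# distribution, which carries personalization signals the cloud never sees raw.
loss_cloud = kl_div(p_slm, p_llm)

print(f"edge distillation loss: {loss_edge:.4f}")
print(f"cloud adaptation loss:  {loss_cloud:.4f}")
```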
Similar Papers
A Survey on Collaborative Mechanisms Between Large and Small Language Models
Artificial Intelligence
Surveys how large and small language models divide work so capable AI runs on phones and other resource-constrained devices.
CE-LSLM: Efficient Large-Small Language Model Inference and Communication via Cloud-Edge Collaboration
Networking and Internet Architecture
Coordinates inference and communication between edge and cloud models to reduce latency and network overhead.
Edge-First Language Model Inference: Models, Metrics, and Tradeoffs
Distributed, Parallel, and Cluster Computing
Examines models, metrics, and tradeoffs for running language model inference directly on edge devices.