CSGO: Generalized Optimization for Cold Start in Wireless Collaborative Edge LLM Systems
By: Xuran Liu, Nan Xue, Rui Bao, and more
Potential Business Impact:
Makes AI work faster on your phone.
While deploying large language models on edge devices promises low-latency, privacy-preserving AI services, it is hindered by limited device resources. Pipeline parallelism enables distributed inference across devices, but existing approaches often ignore the cold-start latency incurred by on-demand model loading. In this paper, we propose a latency-aware scheduling framework that overlaps model loading with computation and communication to minimize total inference latency. Given the device and model parameters, the framework dynamically adjusts layer partitioning and device allocation so that loading time is hidden behind useful work, eliminating as many idle periods as possible. We formulate the problem as a Mixed-Integer Non-Linear Program (MINLP) and design an efficient dynamic programming algorithm to jointly optimize model partitioning and device assignment. Experimental results show that the proposed method significantly reduces cold-start latency compared to baseline strategies.
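The abstract's core idea lends itself to a compact dynamic program. The sketch below is a simplified illustration, not the paper's algorithm or code: it assumes a fixed, ordered device chain, a single forward pass, and that every device begins loading its assigned layers at t=0, so a stage may start computing only once its own weights are loaded and the upstream activation has arrived. All numbers (LAYER_MB, LAYER_MS, DEVICES) are made-up placeholders.

```python
# A minimal sketch (assumed parameters, not the authors' implementation) of a
# dynamic program that picks contiguous layer splits over an ordered device
# chain so that model loading overlaps with upstream compute/communication.

LAYER_MB = [40.0] * 24   # assumed per-layer weight size (MB)
LAYER_MS = [3.0] * 24    # assumed per-layer compute time on a reference device (ms)
DEVICES = [
    # (load rate MB/ms, compute speedup vs. reference, link delay to next device ms)
    (0.8, 1.0, 5.0),
    (0.5, 0.7, 5.0),
    (1.2, 1.5, 0.0),     # last device has no downstream link
]

def plan_partition():
    """dp[k][j]: earliest time the activation leaves the k-th device, given
    that devices 0..k-1 together processed layers [0, j). Every device starts
    loading its own shard at t=0, so a stage begins computing at
    max(own load time, arrival of the upstream activation)."""
    n, d = len(LAYER_MB), len(DEVICES)
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(d + 1)]
    cut = [[0] * (n + 1) for _ in range(d + 1)]
    dp[0][0] = 0.0
    for k, (load_rate, speedup, link_ms) in enumerate(DEVICES):
        for lo in range(n):
            if dp[k][lo] == INF:
                continue
            for hi in range(lo + 1, n + 1):
                load = sum(LAYER_MB[lo:hi]) / load_rate
                compute = sum(LAYER_MS[lo:hi]) / speedup
                leave = max(load, dp[k][lo]) + compute + link_ms
                if leave < dp[k + 1][hi]:
                    dp[k + 1][hi], cut[k + 1][hi] = leave, lo
    # Backtrack the chosen split points (first layer index of stages 2..d).
    splits, j = [], n
    for k in range(d, 1, -1):
        j = cut[k][j]
        splits.append(j)
    return splits[::-1], dp[d][n]

if __name__ == "__main__":
    splits, latency = plan_partition()
    print(f"split before layers {splits}: cold-start latency ~ {latency:.1f} ms")
```

With prefix sums over layer sizes and compute times, the recurrence runs in O(d·n²). The paper's MINLP additionally optimizes which device occupies each pipeline position, which this sketch fixes in advance.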
Similar Papers
Dynamic Quality-Latency Aware Routing for LLM Inference in Wireless Edge-Device Networks
Information Theory
Makes smart assistants answer faster and better.
Optimal Multi-Constrained Workflow Scheduling for Cyber-Physical Systems in the Edge-Cloud Continuum
Networking and Internet Architecture
Makes smart devices work faster together.
Rethinking Inference Placement for Deep Learning across Edge and Cloud Platforms: A Multi-Objective Optimization Perspective and Future Directions
Distributed, Parallel, and Cluster Computing
Makes smart apps run faster and safer.