SLIDE: Simultaneous Model Downloading and Inference at the Wireless Network Edge
By: Guanqiao Qu, Tao Li, Qian Chen, and more
To support on-device inference, next-generation mobile networks are expected to provide real-time model downloading services to mobile users. However, powerful AI models typically have large model sizes, resulting in excessive end-to-end (E2E) downloading-and-inference (DAI) latency. To address this issue, we propose a simultaneous model downloading and inference (SLIDE) framework, which allows users to perform inference with already-downloaded layers while simultaneously receiving the remaining layers of the model. To this end, we formulate a task throughput maximization problem that jointly optimizes model provisioning, spectrum bandwidth allocation, and computing resource allocation for multi-user downlink systems. Unlike traditional DAI frameworks, SLIDE introduces recursive dependencies across layers, where inference latency depends recursively on the downloading bandwidth and computing resources allocated to each of the preceding layers. To solve this challenging problem, we design an efficient algorithm that obtains the optimal solution with polynomial-time complexity. Simulation results demonstrate that the proposed SLIDE framework significantly improves task throughput under latency and communication resource constraints compared with conventional model-downloading schemes.
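The recursive dependency described in the abstract can be made concrete with a short sketch: inference on layer i may begin only once both layer i's download and layer i-1's inference have finished, so the layer completion times form a max-plus recursion over the per-layer download and compute times. The Python sketch below is a minimal illustration of that recursion, not the authors' implementation; the function names, per-layer sizes, downlink rate, and device compute speed are all assumed for illustration.

# Minimal sketch (assumed names and numbers, not the paper's code) of the
# pipelined downloading-and-inference latency recursion behind SLIDE.

def slide_e2e_latency(layer_bits, layer_flops, rate_bps, speed_flops):
    """E2E latency when inference overlaps with downloading (SLIDE-style).

    layer_bits[i]  : size of layer i's parameters in bits (illustrative)
    layer_flops[i] : compute cost of layer i's inference (illustrative)
    rate_bps       : downlink rate allocated to this user
    speed_flops    : on-device compute speed
    """
    t_download = 0.0  # time at which layer i finishes downloading
    t_infer = 0.0     # time at which layer i's inference finishes
    for bits, flops in zip(layer_bits, layer_flops):
        t_download += bits / rate_bps
        # Recursive dependency: layer i's inference starts only after both
        # its own download and layer i-1's inference have completed.
        t_infer = max(t_download, t_infer) + flops / speed_flops
    return t_infer

def sequential_e2e_latency(layer_bits, layer_flops, rate_bps, speed_flops):
    """Conventional DAI baseline: download the full model, then run inference."""
    return (sum(b / rate_bps for b in layer_bits)
            + sum(f / speed_flops for f in layer_flops))

if __name__ == "__main__":
    bits = [8e6, 16e6, 16e6, 4e6]   # illustrative per-layer sizes (bits)
    flops = [2e9, 4e9, 4e9, 1e9]    # illustrative per-layer compute (FLOPs)
    rate, speed = 100e6, 10e9       # 100 Mb/s downlink, 10 GFLOPS device
    print("SLIDE:     ", slide_e2e_latency(bits, flops, rate, speed))
    print("Sequential:", sequential_e2e_latency(bits, flops, rate, speed))

With these toy numbers the pipelined schedule hides most of the download time behind computation, which is precisely the latency gap that the paper's joint bandwidth and compute allocation exploits across users.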
Similar Papers
End-Edge Model Collaboration: Bandwidth Allocation for Data Upload and Model Transmission
Emerging Technologies
Makes smart gadgets learn better with less internet.
Dynamic Quality-Latency Aware Routing for LLM Inference in Wireless Edge-Device Networks
Information Theory
Makes smart assistants answer faster and better.
Real-Time Inference for Distributed Multimodal Systems under Communication Delay Uncertainty
Machine Learning (CS)
Lets computers understand events with changing delays.