End-Edge Model Collaboration: Bandwidth Allocation for Data Upload and Model Transmission
By: Dailin Yang, Shuhang Zhang, Hongliang Zhang, and more
Potential Business Impact:
Makes smart gadgets learn better with less internet.
The widespread adoption of large artificial intelligence (AI) models has enabled numerous applications of the Internet of Things (IoT). However, large AI models require substantial computational and memory resources, which exceed the capabilities of resource-constrained IoT devices. The end-edge collaboration paradigm is developed to address this issue: a small model on the end device performs inference tasks, while a large model on the edge server assists with model updates. To improve inference accuracy, the data generated on the end device is periodically uploaded to the edge server to update the large model, and a distilled version of the updated model is transmitted back to the end device. Since the communication link between the end device and the edge server has limited bandwidth, it is important to investigate whether the system should allocate more bandwidth to data upload or to model transmission. In this paper, we characterize the impact of data upload and model transmission on inference accuracy. We then formulate a bandwidth allocation problem and, by solving it, derive an efficient optimization framework for the end-edge collaboration system. Simulation results demonstrate that our framework significantly enhances mean average precision (mAP) under various bandwidths and data sizes.
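The core trade-off the abstract describes can be sketched numerically: a total link bandwidth is split between uploading fresh training data and transmitting the distilled model back, and too little of either hurts accuracy. The following is a minimal illustrative sketch, not the paper's method; the `accuracy` proxy and all numeric parameters are assumptions chosen only to make the trade-off visible.

```python
# Hypothetical sketch of the bandwidth-split trade-off: a total bandwidth
# `total_bw` is divided between data upload (fraction alpha) and distilled
# model transmission (fraction 1 - alpha). The accuracy proxy below is an
# illustrative assumption, NOT the model derived in the paper.

def accuracy(alpha, total_bw=10.0, data_size=120.0, model_size=20.0, period=10.0):
    """Toy mAP proxy for one update period under a given bandwidth split."""
    up_bw = alpha * total_bw          # bandwidth allocated to data upload
    down_bw = (1.0 - alpha) * total_bw  # bandwidth for model transmission
    if up_bw <= 0.0 or down_bw <= 0.0:
        return 0.0
    # Fraction of the generated data that can be uploaded within one period.
    data_frac = min(1.0, up_bw * period / data_size)
    # Fraction of the distilled model delivered back within one period.
    model_frac = min(1.0, down_bw * period / model_size)
    # Assumed coupling: accuracy needs both fresh data and a complete model.
    return data_frac * model_frac

# Grid search over the split ratio, standing in for the paper's optimization.
best_alpha = max((a / 100.0 for a in range(1, 100)), key=accuracy)
print(f"best upload fraction: {best_alpha:.2f}, "
      f"mAP proxy: {accuracy(best_alpha):.3f}")
```

With these assumed numbers the search settles on devoting most of the link to data upload, since the distilled model is small relative to the data; changing `data_size` or `model_size` shifts the optimal split, which is the sensitivity the paper's framework is designed to handle.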
Similar Papers
Bandwidth Allocation for Cloud-Augmented Autonomous Driving
Robotics
Cars use cloud power for smarter driving.
Edge-Based Predictive Data Reduction for Smart Agriculture: A Lightweight Approach to Efficient IoT Communication
Machine Learning (CS)
Saves battery by sending less sensor data.
The Larger the Merrier? Efficient Large AI Model Inference in Wireless Edge Networks
Machine Learning (CS)
Makes smart computer programs run faster on phones.