Edge Large AI Models: Collaborative Deployment and IoT Applications
By: Zixin Wang, Yuanming Shi, Khaled B. Letaief
Potential Business Impact:
Smart devices work together for faster AI.
Large artificial intelligence models (LAMs) emulate human-like problem-solving across diverse domains, modalities, and tasks. By leveraging the communication and computation resources of geographically distributed edge devices, edge LAMs enable real-time intelligent services at the network edge. Unlike conventional edge AI, which relies on small or moderate-sized models for direct feature-to-prediction mappings, edge LAMs coordinate intricate modular components to support context-aware generative tasks and multi-modal inference. We propose a collaborative deployment framework for edge LAMs that accounts for both the intelligent capabilities of LAMs and the limited resources of edge networks. Specifically, we propose a collaborative training framework over heterogeneous edge networks that adaptively decomposes LAMs according to computation resources, data modalities, and training objectives, reducing communication and computation overheads during fine-tuning. Furthermore, we introduce a microservice-based inference framework that virtualizes the functional modules of edge LAMs according to their architectural characteristics, thereby improving resource utilization and reducing inference latency. The developed edge LAMs will provide actionable solutions for diversified Internet-of-Things (IoT) applications by constructing mappings from diverse sensor data to token representations and fine-tuning on domain knowledge.
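To make the adaptive-decomposition idea concrete, here is a minimal sketch (not from the paper; function and variable names are hypothetical) of one simple policy: assigning contiguous spans of a LAM's transformer blocks to edge devices in proportion to each device's compute capacity.

```python
# Hypothetical illustration of adaptive model decomposition over a
# heterogeneous edge network: each device receives a contiguous span of
# transformer blocks sized proportionally to its compute capacity.

def partition_layers(num_layers, capacities):
    """Return (start, end) layer spans, one per device, with span sizes
    proportional to the device capacities (e.g., TFLOPS)."""
    total = sum(capacities)
    spans, start = [], 0
    for i, cap in enumerate(capacities):
        if i == len(capacities) - 1:
            count = num_layers - start  # last device absorbs rounding slack
        else:
            count = round(num_layers * cap / total)
        spans.append((start, start + count))
        start += count
    return spans

# Example: 32 transformer blocks over three heterogeneous devices,
# where the first device is twice as capable as each of the others.
spans = partition_layers(32, [4.0, 2.0, 2.0])
print(spans)  # [(0, 16), (16, 24), (24, 32)]
```

A real decomposition policy would also weigh data modalities, per-link bandwidth, and the training objective, as the abstract notes; this sketch captures only the compute-proportional dimension.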
Similar Papers
Edge Large AI Models: Revolutionizing 6G Networks
Networking and Internet Architecture
Smart phones will do many complex tasks.
Satellite Edge Artificial Intelligence with Large Models: Architectures and Technologies
Machine Learning (CS)
Satellites process data in space for faster alerts.
Smaller, Smarter, Closer: The Edge of Collaborative Generative AI
Distributed, Parallel, and Cluster Computing
Lets AI work faster and cheaper everywhere.