MoA-Off: Adaptive Heterogeneous Modality-Aware Offloading with Edge-Cloud Collaboration for Efficient Multimodal LLM Inference
By: Zheming Yang, Qi Guo, Yunqing Hu, and more
Potential Business Impact:
Makes multimodal AI models run faster and use fewer resources on phones and other low-power devices.
Multimodal large language models (MLLMs) enable powerful cross-modal inference but impose significant computational and latency burdens, posing severe challenges for deployment in resource-constrained environments. In this paper, we propose MoA-Off, an adaptive heterogeneous modality-aware offloading framework with edge-cloud collaboration for efficient MLLM inference. MoA-Off introduces a lightweight heterogeneous modality-aware module that estimates the complexity of heterogeneous inputs through multi-dimensional feature analysis. It then applies an adaptive edge-cloud collaborative offloading strategy that dynamically schedules workloads between the edge and the cloud based on modality-aware complexity scores and real-time system states. Experimental results demonstrate that MoA-Off achieves over a 30% reduction in latency and a 30%-65% decrease in resource overhead while maintaining accuracy competitive with traditional approaches.
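For intuition, here is a minimal sketch of how a modality-aware offloading decision along these lines could be wired together. It is an illustration under stated assumptions, not the paper's implementation: the feature terms, weights, thresholds, and the SystemState fields are all hypothetical.

```python
# Illustrative sketch only: the complexity features, weights, and thresholds
# below are assumptions for exposition, not MoA-Off's actual module.
from dataclasses import dataclass


@dataclass
class SystemState:
    edge_queue_len: int   # pending requests on the edge device
    edge_util: float      # edge accelerator utilization in [0, 1]
    uplink_mbps: float    # current edge-to-cloud bandwidth


def modality_complexity(modality: str, payload_bytes: int, token_estimate: int) -> float:
    """Crude multi-dimensional complexity score in [0, 1].

    Combines modality type, payload size, and an estimated token count;
    a real system would use a learned, per-modality feature analysis.
    """
    base = {"text": 0.2, "audio": 0.4, "image": 0.5, "video": 0.8}.get(modality, 0.5)
    size_term = min(payload_bytes / 5_000_000, 1.0)   # normalize against ~5 MB
    token_term = min(token_estimate / 4096, 1.0)      # normalize against 4k tokens
    return min(1.0, 0.5 * base + 0.3 * size_term + 0.2 * token_term)


def offload_decision(score: float, state: SystemState,
                     score_threshold: float = 0.6,
                     util_threshold: float = 0.85) -> str:
    """Route low-complexity inputs to the edge unless it is saturated."""
    edge_overloaded = state.edge_util > util_threshold or state.edge_queue_len > 8
    if score < score_threshold and not edge_overloaded:
        return "edge"
    # High-complexity inputs (or a busy edge) go to the cloud, provided
    # the uplink can move the payload in reasonable time.
    return "cloud" if state.uplink_mbps > 1.0 else "edge"


if __name__ == "__main__":
    state = SystemState(edge_queue_len=3, edge_util=0.6, uplink_mbps=20.0)
    score = modality_complexity("image", payload_bytes=1_200_000, token_estimate=512)
    print(offload_decision(score, state))  # lightweight request stays on the edge
```

The key design choice the sketch mirrors is that the routing decision depends jointly on input complexity and live system state, so the same input can be served at the edge when it is idle and offloaded to the cloud when it is congested.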
Similar Papers
CoMoE: Collaborative Optimization of Expert Aggregation and Offloading for MoE-based LLMs at Edge
Networking and Internet Architecture
Makes big AI models fit on phones.
OD-MoE: On-Demand Expert Loading for Cacheless Edge-Distributed MoE Inference
Distributed, Parallel, and Cluster Computing
Lets small computers run big AI models.
Adaptive Guidance Semantically Enhanced via Multimodal LLM for Edge-Cloud Object Detection
CV and Pattern Recognition
Helps cameras see better in dark or crowded places.