AIVD: Adaptive Edge-Cloud Collaboration for Accurate and Efficient Industrial Visual Detection
By: Yunqing Hu, Zheming Yang, Chang Zhao, and more
Potential Business Impact:
Smart AI sees better, uses less power.
Multimodal large language models (MLLMs) demonstrate exceptional capabilities in semantic understanding and visual reasoning, yet they still face challenges in precise object localization and resource-constrained edge-cloud deployment. To address these challenges, this paper proposes the AIVD framework, which achieves unified precise localization and high-quality semantic generation through collaboration between lightweight edge detectors and cloud-based MLLMs. To enhance the cloud MLLM's robustness to noise in edge-cropped boxes and to scenario variations, we design an efficient fine-tuning strategy with visual-semantic collaborative augmentation, significantly improving classification accuracy and semantic consistency. Furthermore, to maintain high throughput and low latency across heterogeneous edge devices and dynamic network conditions, we propose a heterogeneous resource-aware dynamic scheduling algorithm. Experimental results demonstrate that AIVD substantially reduces resource consumption while improving MLLM classification performance and semantic generation quality. The proposed scheduling strategy also achieves higher throughput and lower latency across diverse scenarios.
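The collaboration described in the abstract pairs a lightweight edge detector (precise localization) with a cloud MLLM (semantic generation), with work routed by a resource-aware scheduler. The sketch below illustrates that pipeline shape in Python; the class and function names, the scoring heuristic, and the placeholder detector/MLLM calls are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of AIVD-style edge-cloud collaboration. All names,
# thresholds, and the scheduling heuristic are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class EdgeDevice:
    """Assumed descriptor for one heterogeneous edge node."""
    name: str
    compute_tops: float   # spare compute budget (TOPS), assumed metric
    uplink_mbps: float    # current uplink bandwidth to the cloud
    queue_len: int        # frames waiting for local detection


def detect_on_edge(frame_id: int) -> List[Tuple[float, float, float, float]]:
    """Placeholder for the lightweight edge detector: returns candidate
    boxes (x, y, w, h). A real system would run a compact detector here."""
    return [(0.1 * frame_id, 0.2, 0.3, 0.4)]


def refine_in_cloud(boxes: List[Tuple[float, float, float, float]]) -> List[str]:
    """Placeholder for the cloud MLLM: takes cropped boxes and returns
    class labels plus semantic descriptions."""
    return [f"defect candidate at {b}" for b in boxes]


def schedule(devices: List[EdgeDevice]) -> EdgeDevice:
    """Toy resource-aware rule (an assumption, not the paper's algorithm):
    prefer devices with spare compute, high bandwidth, and a short queue."""
    def score(d: EdgeDevice) -> float:
        return d.compute_tops + 0.1 * d.uplink_mbps - 2.0 * d.queue_len
    return max(devices, key=score)


if __name__ == "__main__":
    fleet = [
        EdgeDevice("edge-a", compute_tops=20, uplink_mbps=50, queue_len=3),
        EdgeDevice("edge-b", compute_tops=10, uplink_mbps=200, queue_len=0),
    ]
    target = schedule(fleet)                # pick an edge node for this frame
    boxes = detect_on_edge(frame_id=1)      # edge: precise localization
    labels = refine_in_cloud(boxes)         # cloud: semantic generation
    print(f"dispatched to {target.name}: {labels}")
```

The key design point this sketch mirrors is the split of responsibilities: only cropped boxes (not full frames) cross the edge-cloud boundary, and the scheduler decides where detection runs based on device load and link quality.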
Similar Papers
Adaptive Guidance Semantically Enhanced via Multimodal LLM for Edge-Cloud Object Detection
CV and Pattern Recognition
Helps cameras see better in dark or crowded places.
Collaborative Edge-to-Server Inference for Vision-Language Models
CV and Pattern Recognition
Lets AI see details without sending big pictures.
Semantic Edge-Cloud Communication for Real-Time Urban Traffic Surveillance with ViT and LLMs over Mobile Networks
Networking and Internet Architecture
Makes city traffic cameras send less data.