EPARA: Parallelizing Categorized AI Inference in Edge Clouds
By: Yubo Wang, Yubo Cui, Tuo Shi, and more
Potential Business Impact:
Lets edge devices and servers handle many more AI requests with the same hardware.
With the increasing adoption of AI applications such as large language models and computer vision, the computational demands on AI inference systems are continuously rising, making the enhancement of task processing capacity on existing hardware a primary objective in edge clouds. We propose EPARA, an end-to-end parallel AI inference framework for the edge, aimed at enhancing edge AI serving capability. Our key idea is to categorize tasks based on their sensitivity to latency/frequency and their requirement for GPU resources, thereby achieving both request-level and service-level task-resource allocation. EPARA consists of three core components: 1) a task-categorized parallelism allocator that decides the parallel mode of each task, 2) a distributed request handler that performs the computation for each specific request, and 3) a state-aware scheduler that periodically updates service placement in edge clouds. We implement an EPARA prototype and conduct a case study of EPARA's operation on LLM and segmentation tasks. Evaluation through testbed experiments involving edge servers, embedded devices, and microcomputers shows that EPARA achieves up to 2.1× higher goodput on production workloads compared to prior frameworks, while adapting to various edge AI inference tasks.
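The abstract does not spell out the allocator's decision rules, but a minimal sketch can illustrate the idea of mapping task categories to parallel modes. Everything below is hypothetical: the `Task` fields, thresholds, and mode names are illustrative assumptions, not the paper's actual policy.

```python
# Toy sketch of a task-categorized parallelism allocator (hypothetical;
# field names, thresholds, and modes are assumptions, not EPARA's design).
from dataclasses import dataclass
from enum import Enum, auto


class ParallelMode(Enum):
    DATA_PARALLEL = auto()   # replicate the service; suits frequent, light tasks
    MODEL_PARALLEL = auto()  # split one model across devices; suits GPU-heavy tasks
    LOCAL = auto()           # run on a single device; suits latency-critical tasks


@dataclass
class Task:
    latency_sensitive: bool  # does the request have a tight deadline?
    request_rate_hz: float   # how often requests for this service arrive
    gpu_mem_gb: float        # GPU memory the model needs


def allocate_parallel_mode(task: Task, device_gpu_mem_gb: float = 8.0) -> ParallelMode:
    """Pick a parallel mode from latency sensitivity and GPU demand."""
    if task.gpu_mem_gb > device_gpu_mem_gb:
        # Model does not fit on one device: partition it across devices.
        return ParallelMode.MODEL_PARALLEL
    if task.latency_sensitive:
        # Avoid cross-device communication overhead for deadline-driven tasks.
        return ParallelMode.LOCAL
    # High-frequency but resource-light tasks benefit from replicated serving.
    return ParallelMode.DATA_PARALLEL


if __name__ == "__main__":
    llm = Task(latency_sensitive=False, request_rate_hz=5.0, gpu_mem_gb=24.0)
    seg = Task(latency_sensitive=True, request_rate_hz=30.0, gpu_mem_gb=2.0)
    print(allocate_parallel_mode(llm))  # ParallelMode.MODEL_PARALLEL
    print(allocate_parallel_mode(seg))  # ParallelMode.LOCAL
```

In this reading, request-level allocation corresponds to choosing a mode per incoming task, while service-level allocation would reuse such categories when the scheduler periodically places services across the edge cluster.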
Similar Papers
Dora: QoE-Aware Hybrid Parallelism for Distributed Edge AI
Distributed, Parallel, and Cluster Computing
Makes AI apps run faster and use less power.
Accurate Performance Predictors for Edge Computing Applications
Distributed, Parallel, and Cluster Computing
Helps computers guess how fast apps will run.
Dynamic Pricing for On-Demand DNN Inference in the Edge-AI Market
Artificial Intelligence
Smarter AI on phones, faster and cheaper.