Modeling Tradeoffs Between Mobility, Cost, and Performance in Edge Computing
By: Muhammad Danish Waseem, Ahmed Ali-Eldin
Edge computing provides a cloud-like architecture in which small-scale resources are distributed near the network edge, enabling applications on resource-constrained devices to offload latency-critical computations to these resources. While some recent work has shown that the resource constraints of the edge can result in higher end-to-end latency under medium to high utilization due to increased queuing delays, to the best of our knowledge, there has been no work on modeling the trade-offs of deploying on edge versus cloud infrastructures in the presence of mobility. Understanding the costs and trade-offs of this architecture is important for network designers, as it has now been adopted as part of 5G and beyond networks in the form of Multi-access Edge Computing (MEC). In this paper, we focus on quantifying and estimating the cost of edge computing. Using closed-form queuing models, we explore the cost-performance trade-offs in the presence of different system dynamics. We model how workload mobility and workload variations influence these trade-offs, and validate our results with realistic experiments and simulations. Finally, we discuss the practical implications for designing edge systems and developing algorithms for efficient resource and workload management.
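To make the queuing-delay effect described in the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual closed-form models: it compares end-to-end latency for an edge deployment (low network round-trip time, limited service capacity) against a cloud deployment (higher round-trip time, ample capacity) using a simple M/M/1 queue. The choice of M/M/1 and all numeric parameter values (round-trip times, service rates, offered loads) are hypothetical assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's model): edge vs. cloud end-to-end latency
# under an assumed M/M/1 queue at each compute tier. All parameters are hypothetical.

def mm1_sojourn_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in an M/M/1 system: W = 1 / (mu - lambda), valid only when lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("Queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def end_to_end_latency(arrival_rate: float, service_rate: float, network_rtt: float) -> float:
    """Network round-trip time plus queuing-plus-service delay at the compute tier."""
    return network_rtt + mm1_sojourn_time(arrival_rate, service_rate)

# Assumed setup: the edge is nearby (low RTT) but resource-constrained (low service rate);
# the cloud is farther away (high RTT) but well provisioned (high service rate).
edge_rtt, cloud_rtt = 0.005, 0.050   # seconds
edge_mu, cloud_mu = 120.0, 1000.0    # requests per second each tier can serve

for lam in (50, 90, 110):            # offered load in requests per second
    edge = end_to_end_latency(lam, edge_mu, edge_rtt)
    cloud = end_to_end_latency(lam, cloud_mu, cloud_rtt)
    print(f"load={lam:>4} req/s  edge={edge * 1e3:6.1f} ms  cloud={cloud * 1e3:6.1f} ms")
```

Under these assumed numbers, the edge wins at low load because its round-trip time is small, but as utilization approaches the edge's limited capacity its queuing delay dominates and the cloud becomes the lower-latency option, which is the cost-performance trade-off the paper sets out to model.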