Optimized Split Computing Framework for Edge and Core Devices
By: Andrea Tassi, Oluwatayo Yetunde Kolawole, Joan Pujol Roig, and more
Potential Business Impact:
Lets phones run smart programs using less power.
With mobile networks expected to support services with stringent requirements that ensure a high-quality user experience, the ability to apply Feed-Forward Neural Network (FFNN) models to User Equipment (UE) use cases has become critical. Given that UEs have limited resources, running FFNNs directly on UEs is an intrinsically challenging problem. This letter proposes an optimization framework for split computing applications in which an FFNN model is partitioned into multiple sections executed by the UE and by edge- and core-located nodes, reducing the required UE computational footprint while containing the inference time. An efficient heuristic strategy for solving the optimization problem is also provided. The proposed framework is shown to be robust in heterogeneous settings, eliminating the need for retraining and reducing the UE's memory footprint by over 33.6% and its CPU footprint by over 60%.
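To illustrate the core idea of split computing described above, the sketch below partitions a sequential 6-layer FFNN at two split points: layers before the first point run on the UE, layers between the points on an edge node, and the rest in the core, minimizing UE compute subject to an end-to-end latency budget. All per-layer costs, activation sizes, link-delay coefficients, and the brute-force search are hypothetical illustrations; the letter's actual optimization model and heuristic are more elaborate.

```python
def best_split(ue_cost, edge_cost, core_cost, act_size,
               ue_edge_ms_per_unit, edge_core_ms_per_unit, budget_ms):
    """Brute-force split points (i, j): layers [0, i) run on the UE,
    [i, j) on the edge node, and [j, n) in the core.

    Returns the split with the lowest UE compute (ties broken by total
    latency) whose end-to-end latency fits the budget, or None.
    """
    n = len(ue_cost)
    best = None
    for i in range(n + 1):
        for j in range(i, n + 1):
            ue = sum(ue_cost[:i])
            total = ue + sum(edge_cost[i:j]) + sum(core_cost[j:])
            if i < n:  # activation at boundary i crosses the UE-edge link
                total += act_size[i] * ue_edge_ms_per_unit
            if j < n:  # activation at boundary j crosses the edge-core link
                total += act_size[j] * edge_core_ms_per_unit
            if total <= budget_ms and (
                    best is None or (ue, total) < (best[0], best[1])):
                best = (ue, total, i, j)
    return best


# Hypothetical per-layer compute cost (ms) on each tier: the UE is the
# slowest hardware, the core the fastest.
ue_cost = [5.0, 8.0, 12.0, 20.0, 20.0, 10.0]
edge_cost = [1.0, 1.5, 2.0, 3.0, 3.0, 1.5]
core_cost = [0.5, 0.8, 1.0, 1.5, 1.5, 0.8]
# Hypothetical activation sizes at each layer boundary (index 0 = raw
# input); later boundaries are smaller, which is what makes splitting
# after a few layers cheaper than shipping the raw input.
act_size = [100, 40, 20, 10, 8, 8, 4]

result = best_split(ue_cost, edge_cost, core_cost, act_size,
                    ue_edge_ms_per_unit=0.2, edge_core_ms_per_unit=0.1,
                    budget_ms=25.0)
ue_ms, total_ms, i, j = result
print(f"UE runs layers [0,{i}), edge [{i},{j}), core [{j},6): "
      f"UE compute {ue_ms} ms, total latency {total_ms:.1f} ms")
```

With these numbers, offloading everything (i = 0) violates the budget because the raw input is expensive to transmit, so the search keeps one layer on the UE and splits the remainder between edge and core, mirroring the trade-off the framework optimizes.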
Similar Papers
Adaptive AI Model Partitioning over 5G Networks
Networking and Internet Architecture
Lets phones run smart apps without draining battery.
Optimizing Energy and Latency in 6G Smart Cities with Edge CyberTwins
Networking and Internet Architecture
Makes smart city internet faster while using less power.