Score: 1

Joint Optimization of DNN Model Caching and Request Routing in Mobile Edge Computing

Published: November 5, 2025 | arXiv ID: 2511.03159v1

By: Shuting Qiu, Fang Dong, Siyu Tan, and more

Potential Business Impact:

Speeds up AI features in mobile apps by splitting large neural-network models into smaller parts that can be cached on nearby edge servers.

Business Areas:
Content Delivery Network; Content and Publishing

Mobile edge computing (MEC) can pre-cache deep neural networks (DNNs) near end-users, providing low-latency services and improving users' quality of experience (QoE). However, caching all DNN models at edge servers with limited capacity is difficult, and the impact of model loading time on QoE remains underexplored. Hence, we introduce dynamic DNNs in edge scenarios, disassembling a complete DNN model into interrelated submodels for more fine-grained and flexible model caching and request routing solutions. This raises the pressing issue of jointly deciding request routing and submodel caching for dynamic DNNs to balance model inference precision and loading latency for QoE optimization. In this paper, we study the joint dynamic model caching and request routing problem in MEC networks, aiming to maximize user request inference precision under constraints of server resources, latency, and model loading time. To tackle this problem, we propose CoCaR, an offline algorithm based on linear programming and random rounding that leverages dynamic DNNs to optimize caching and routing schemes, achieving near-optimal performance. Furthermore, we develop an online variant of CoCaR, named CoCaR-OL, enabling effective adaptation to dynamic and unpredictable online request patterns. The simulation results demonstrate that the proposed CoCaR improves the average inference precision of user requests by 46% compared to state-of-the-art baselines. In addition, in online scenarios, CoCaR-OL achieves an improvement of no less than 32.3% in user QoE over competitive baselines.
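To make the "linear programming plus random rounding" idea concrete, below is a minimal illustrative sketch of that general technique applied to a single-server submodel-caching toy problem. The variable names (precision, size, capacity), the knapsack-style formulation, and the greedy capacity repair are assumptions for illustration only; the paper's CoCaR algorithm jointly handles routing, latency, and model loading time across multiple edge servers, which this sketch does not.

# Illustrative sketch: LP relaxation + randomized rounding for caching
# decisions. Hypothetical data and formulation; not the paper's actual model.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical submodels: precision gain if cached, and storage size (MB).
precision = np.array([0.92, 0.88, 0.75, 0.60, 0.95])
size      = np.array([400, 250, 120, 80, 520])
capacity  = 700.0  # edge-server storage budget (MB)

# 1) LP relaxation: allow fractional caching decisions x_i in [0, 1].
#    linprog minimizes, so negate the objective to maximize total precision.
res = linprog(c=-precision,
              A_ub=size.reshape(1, -1), b_ub=[capacity],
              bounds=[(0.0, 1.0)] * len(precision),
              method="highs")
x_frac = res.x

# 2) Randomized rounding: cache submodel i with probability x_i, then
#    greedily evict the lowest precision-per-MB item while over capacity.
cached = rng.random(len(x_frac)) < x_frac
while size[cached].sum() > capacity:
    idx = np.where(cached)[0]
    worst = idx[np.argmin(precision[idx] / size[idx])]
    cached[worst] = False

print("fractional LP solution:", np.round(x_frac, 2))
print("cached submodels      :", np.where(cached)[0].tolist())
print("total size / capacity :", size[cached].sum(), "/", capacity)

The appeal of this style of algorithm, as the abstract suggests, is that the LP relaxation gives a tractable upper bound on achievable precision, while rounding recovers an integral caching plan whose expected quality stays close to that bound.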

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Networking and Internet Architecture