Reinforcement Learning for Resource Allocation in Vehicular Multi-Fog Computing

Published: October 31, 2025 | arXiv ID: 2511.00276v1

By: Mohammad Hadi Akbarzadeh, Mahmood Ahmadi, Mohammad Saeed Jahangiry, and more

Potential Business Impact:

Connected vehicles offload computing tasks to nearby fog nodes, cutting response times for latency-sensitive applications.

Business Areas:
Cloud Computing, Internet Services, Software

The exponential growth of Internet of Things (IoT) devices, smart vehicles, and latency-sensitive applications has created an urgent demand for efficient distributed computing paradigms. Multi-Fog Computing (MFC), as an extension of fog and edge computing, deploys multiple fog nodes near end users to reduce latency, enhance scalability, and ensure Quality of Service (QoS). However, resource allocation in MFC environments is highly challenging due to dynamic vehicular mobility, heterogeneous resources, and fluctuating workloads. Traditional optimization-based methods often fail to adapt to such dynamics. Reinforcement Learning (RL), as a model-free decision-making framework, enables adaptive task allocation by continuously interacting with the environment. This paper formulates the resource allocation problem in MFC as a Markov Decision Process (MDP) and investigates the application of RL algorithms such as Q-learning, Deep Q-Networks (DQN), and Actor-Critic. We present experimental results demonstrating improvements in latency, workload balance, and task success rate. The contributions and novelty of this study are also discussed, highlighting the role of RL in addressing emerging vehicular computing challenges.
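The MDP framing in the abstract (states describing fog-node conditions, actions assigning tasks to nodes, rewards reflecting latency and load balance) can be illustrated with a minimal tabular Q-learning sketch. The specific state, action, and reward design below (discrete node load levels as state, choice of fog node as action, a load-based latency penalty as reward) is an illustrative assumption for a toy environment, not the paper's exact model.

```python
import random

random.seed(0)

# Toy MFC task-allocation MDP (illustrative assumption, not the paper's model):
# - NUM_NODES fog nodes, each with a discrete load level 0..MAX_LOAD
# - state  = tuple of current node loads
# - action = index of the fog node the next task is assigned to
# - reward = negative load of the chosen node (a proxy latency penalty),
#   so the agent is pushed toward balancing work across nodes
NUM_NODES = 3
MAX_LOAD = 4

def step(state, action):
    """Assign one task to node `action`; nodes finish work stochastically."""
    loads = list(state)
    reward = -loads[action]                 # busier node -> higher penalty
    loads[action] = min(MAX_LOAD, loads[action] + 1)
    for i in range(NUM_NODES):              # each node completes a task w.p. 0.5
        if loads[i] > 0 and random.random() < 0.5:
            loads[i] -= 1
    return tuple(loads), reward

# Tabular Q-learning with epsilon-greedy exploration
Q = {}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def q(s, a):
    return Q.get((s, a), 0.0)

state = (0,) * NUM_NODES
for _ in range(20000):
    if random.random() < epsilon:
        action = random.randrange(NUM_NODES)
    else:
        action = max(range(NUM_NODES), key=lambda a: q(state, a))
    nxt, reward = step(state, action)
    best_next = max(q(nxt, a) for a in range(NUM_NODES))
    # Standard Q-learning update: Q <- Q + alpha * (TD target - Q)
    Q[(state, action)] = q(state, action) + alpha * (
        reward + gamma * best_next - q(state, action))
    state = nxt

def run(policy, steps=5000):
    """Average per-step reward of a policy on the same environment seed."""
    random.seed(1)
    s, total = (0,) * NUM_NODES, 0.0
    for _ in range(steps):
        a = policy(s)
        s, r = step(s, a)
        total += r
    return total / steps

greedy_avg = run(lambda s: max(range(NUM_NODES), key=lambda a: q(s, a)))
random_avg = run(lambda s: random.randrange(NUM_NODES))
print(f"greedy avg reward {greedy_avg:.2f} vs random {random_avg:.2f}")
```

The learned greedy policy steers tasks toward lightly loaded nodes, so its average reward (negative latency penalty) exceeds that of uniformly random assignment; the DQN and Actor-Critic variants studied in the paper replace the table `Q` with function approximators to handle larger, continuous state spaces.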

Country of Origin
🇮🇷 Iran

Page Count
6 pages

Category
Computer Science:
Networking and Internet Architecture