A Conflict-Aware Resource Management Framework for the Computing Continuum
By: Vlad Popescu-Vifor, Ilir Murturi, Praveen Kumar Donta, and more
Increasing device heterogeneity and decentralization requirements in the computing continuum (i.e., spanning edge, fog, and cloud) introduce new challenges in resource orchestration. In such environments, agents are often responsible for optimizing resource usage across deployed services. However, agent decisions can lead to persistent conflict loops, inefficient resource utilization, and degraded service performance. To overcome these challenges, we propose a novel framework for adaptive conflict resolution in resource-oriented orchestration based on Deep Reinforcement Learning (DRL). The framework handles resource conflicts across deployments and integrates a DRL model trained to mediate such conflicts using real-time performance feedback and historical state information. The framework has been prototyped and validated on a Kubernetes-based testbed, demonstrating its methodological feasibility and architectural resilience. Preliminary results show that the framework achieves efficient resource reallocation and adaptive learning in dynamic scenarios, thus providing a scalable and resilient solution for conflict-aware orchestration in the computing continuum.
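To make the mediation idea concrete, here is a minimal sketch of an RL-trained conflict mediator. The paper trains a deep RL model on a Kubernetes testbed; this toy stands in for that with a tabular Q-learner and a one-node simulator, and all names (`outcome`, `train`, `mediate`, the `split`/`grant_*` actions, the reward shape) are illustrative assumptions, not the authors' design.

```python
import random

# Mediation decisions when two agents contend for the same node's CPU.
ACTIONS = ["grant_a", "grant_b", "split"]

def outcome(state, action):
    """Reward for a mediation decision. state = (demand_a, demand_b, capacity).

    Reward = total demand served, minus a penalty for starving either agent
    (a stand-in for the 'degraded service performance' the paper penalizes).
    """
    da, db, cap = state
    if action == "grant_a":
        alloc_a, alloc_b = min(da, cap), 0
    elif action == "grant_b":
        alloc_a, alloc_b = 0, min(db, cap)
    else:  # split capacity proportionally to demand
        alloc_a = min(da, cap * da // (da + db))
        alloc_b = min(db, cap - alloc_a)
    starved = (alloc_a == 0) + (alloc_b == 0)
    return alloc_a + alloc_b - 2 * starved

def train(episodes=2000, alpha=0.5, epsilon=0.2, seed=0):
    """Epsilon-greedy Q-learning over random one-step conflict episodes."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated reward
    for _ in range(episodes):
        state = (rng.randint(1, 4), rng.randint(1, 4), 4)  # demands, capacity
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)  # explore
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        key = (state, action)
        # One-step (bandit-style) update toward the observed reward.
        q[key] = q.get(key, 0.0) + alpha * (outcome(state, action) - q.get(key, 0.0))
    return q

def mediate(q, state):
    """Resolve a conflict with the greedy learned policy."""
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
```

For symmetric demands that fit within capacity, the learner settles on `split`, since granting exclusively to one agent starves the other and is penalized; swapping the Q-table for a neural policy and the simulator for live cluster feedback recovers the shape of the DRL setup the abstract describes.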
Similar Papers
Multi-Agent Reinforcement Learning for Adaptive Resource Orchestration in Cloud-Native Clusters
Machine Learning (CS)
Applies multi-agent reinforcement learning to adapt resource orchestration in cloud-native clusters.
Distributed Resource Selection for Self-Organising Cloud-Edge Systems
Distributed, Parallel, and Cluster Computing
Distributes resource selection decisions across self-organising cloud-edge systems.
(DEMO) Deep Reinforcement Learning Based Resource Allocation in Distributed IoT Systems
Machine Learning (CS)
Demonstrates deep reinforcement learning for resource allocation in distributed IoT systems.