Multi-Agent Reinforcement Learning for Task Offloading in Wireless Edge Networks
By: Andrea Fox, Francesco De Pellegrini, Eitan Altman
Potential Business Impact:
Helps robots share resources without talking much.
In edge computing systems, autonomous agents must make fast local decisions while competing for shared resources. Existing MARL methods often resort to centralized critics or frequent communication, which fail under limited observability and communication constraints. We propose a decentralized framework in which each agent solves a constrained Markov decision process (CMDP), coordinating implicitly through a shared constraint vector. In the specific case of offloading, for example, these constraints prevent overloading shared server resources. The constraints are updated infrequently and act as a lightweight coordination mechanism: they enable agents to align with global resource usage objectives while requiring little direct communication. Using safe reinforcement learning, agents learn policies that meet both local and global goals. We establish theoretical guarantees under mild assumptions and validate our approach experimentally, showing improved performance over centralized and independent baselines, especially in large-scale settings.
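To make the coordination idea concrete, here is a minimal toy sketch of the mechanism the abstract describes: each agent runs a local primal-dual (safe RL style) update against its own constraint, while a shared constraint vector, refreshed only infrequently, re-splits a global server-load budget. All names, learning rates, and the proportional re-allocation rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

N_AGENTS = 4          # hypothetical number of edge devices
BUDGET = 1.0          # assumed global server-load budget
SYNC_EVERY = 50       # constraint vector updated only infrequently
STEPS = 500
LR_POLICY, LR_DUAL = 0.05, 0.02

# Each agent keeps a scalar "offload propensity" p in [0, 1] and a
# Lagrange multiplier lam for its local CMDP constraint.
p = np.full(N_AGENTS, 0.9)
lam = np.zeros(N_AGENTS)
# Shared constraint vector: each agent's slice of the global budget.
cap = np.full(N_AGENTS, BUDGET / N_AGENTS)

for t in range(STEPS):
    load = p.copy()  # expected server load induced by each agent
    # Primal step on the local Lagrangian p - lam * (p - cap):
    p = np.clip(p + LR_POLICY * (1.0 - lam), 0.0, 1.0)
    # Dual step: raise lam when local load exceeds the assigned cap.
    lam = np.maximum(lam + LR_DUAL * (load - cap), 0.0)
    # Infrequent coordination: re-split the global budget
    # proportionally to observed loads (illustrative rule).
    if t % SYNC_EVERY == 0 and t > 0 and load.sum() > 0:
        cap = BUDGET * load / load.sum()
```

Note that between synchronization points the agents never exchange messages; only the occasional update of `cap` couples their otherwise independent constrained problems, which is the "lightweight coordination" the abstract refers to.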
Similar Papers
Federated Multi-Agent Reinforcement Learning for Privacy-Preserving and Energy-Aware Resource Management in 6G Edge Networks
Machine Learning (CS)
Makes phones work faster and save battery.
Goal-Oriented Multi-Agent Reinforcement Learning for Decentralized Agent Teams
Multiagent Systems
Helps self-driving vehicles work together better.
Consensus-based Decentralized Multi-agent Reinforcement Learning for Random Access Network Optimization
Networking and Internet Architecture
Helps many devices share internet without crashing.