Scaling up Stability: Reinforcement Learning for Distributed Control of Networked Systems in the Space of Stabilizing Policies
By: John Cao, Luca Furieri
We study distributed control of networked systems through reinforcement learning, where neural policies must be simultaneously scalable, expressive, and stabilizing. We introduce a policy parameterization that embeds Graph Neural Networks (GNNs) into a Youla-like magnitude-direction parameterization, yielding distributed stochastic controllers that guarantee network-level closed-loop stability by design. The magnitude is implemented as a stable operator consisting of a GNN acting on disturbance feedback, while the direction is a GNN acting on local observations. We prove robustness of the closed loop to perturbations in both the graph topology and the model parameters, and show how to integrate our parameterization with Proximal Policy Optimization (PPO). Experiments on a multi-agent navigation task show that policies trained on small networks transfer directly to larger networks and to unseen topologies, and that they achieve higher returns with lower variance than a state-of-the-art MARL baseline while preserving stability.
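To make the magnitude-direction split concrete, a minimal PyTorch-style sketch follows. This is not the authors' implementation: the class name MagnitudeDirectionPolicy, the edge_index argument, and the two GNN submodules are illustrative assumptions; only the structure, a nonnegative magnitude from a stable operator on disturbance feedback multiplying a unit-norm direction from a GNN on local observations, follows the abstract.

import torch
import torch.nn as nn

class MagnitudeDirectionPolicy(nn.Module):
    """Illustrative sketch (names hypothetical): each agent's control input
    is a nonnegative magnitude times a unit-norm direction, mirroring the
    magnitude-direction parameterization described in the abstract."""

    def __init__(self, magnitude_gnn: nn.Module, direction_gnn: nn.Module,
                 eps: float = 1e-6):
        super().__init__()
        # Assumed: magnitude_gnn realizes a stable operator on disturbance
        # feedback (its stability must be certified separately, e.g., by
        # construction); direction_gnn acts on local observations.
        self.magnitude_gnn = magnitude_gnn
        self.direction_gnn = direction_gnn
        self.eps = eps

    def forward(self, disturbances, observations, edge_index):
        # Magnitude: one nonnegative scalar per agent; boundedness of this
        # signal is what would carry the closed-loop stability guarantee.
        m = self.magnitude_gnn(disturbances, edge_index)   # (n_agents, 1)
        # Direction: normalized so only the magnitude sets the input's size.
        d = self.direction_gnn(observations, edge_index)   # (n_agents, act_dim)
        d = d / (d.norm(dim=-1, keepdim=True) + self.eps)
        return m.abs() * d

Under this split, any expressive message-passing network can serve as the direction module without affecting stability, since the size of the control input is bounded by the magnitude alone.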
Similar Papers
Power Grid Control with Graph-Based Distributed Reinforcement Learning
Machine Learning (CS)
Applies graph-based distributed reinforcement learning to improve power grid operation.
Learning stabilising policies for constrained nonlinear systems
Systems and Control
Learns stabilizing policies for nonlinear systems subject to constraints.
Lyapunov-Based Graph Neural Networks for Adaptive Control of Multi-Agent Systems
Systems and Control
Uses Lyapunov-based graph neural networks for adaptive multi-agent control, such as tracking moving targets.