Graph-Enhanced Model-Free Reinforcement Learning Agents for Efficient Power Grid Topological Control
By: Eloy Anguiano Batanero, Ángela Fernández, Álvaro Barbero
Potential Business Impact:
Makes power grids smarter and more efficient.
The increasing complexity of power grid management, driven by the emergence of prosumers and the demand for cleaner energy solutions, has necessitated innovative approaches to ensure stability and efficiency. This paper presents a novel approach within the model-free framework of reinforcement learning, aimed at optimizing power network operations without prior expert knowledge. We introduce a masked topological action space, enabling agents to explore diverse strategies for cost reduction while maintaining reliable service, using the grid state's logic as a guide for choosing valid actions. Through extensive experimentation across 20 different scenarios in a simulated 5-substation environment, we demonstrate that our approach achieves a consistent reduction in power losses while ensuring grid stability against potential blackouts. The results underscore the effectiveness of combining dynamic observation formalization with opponent-based training, pointing to a viable path toward autonomous management solutions in modern energy systems, or even toward a foundational model for this field.
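The core mechanism named in the abstract, a masked topological action space, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the Q-value vector, and the validity mask below are illustrative assumptions. The general idea is that actions deemed invalid by the current grid state (e.g., switching a disconnected line) are masked out before the agent's greedy selection, so exploration stays within feasible topologies.

```python
import numpy as np

def select_masked_action(q_values: np.ndarray, valid_mask: np.ndarray) -> int:
    """Greedy action selection restricted to valid actions.

    Invalid actions have their Q-values replaced with -inf,
    so argmax can never pick them.
    """
    masked_q = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Hypothetical example: action 1 has the highest Q-value, but the
# state logic marks it invalid, so the agent falls back to action 2.
q = np.array([0.2, 0.9, 0.5, 0.1])
mask = np.array([True, False, True, True])
print(select_masked_action(q, mask))
```

Masking at selection time (rather than penalizing invalid actions with negative rewards) keeps the learning signal focused on distinguishing among feasible topologies only, which is one common rationale for this design in constrained action spaces.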
Similar Papers
Power Grid Control with Graph-Based Distributed Reinforcement Learning
Machine Learning (CS)
Helps power grids run better with smart computers.
Optimizing Power Grid Topologies with Reinforcement Learning: A Survey of Methods and Challenges
Systems and Control
Helps power grids use renewable energy better.
Learning Topology Actions for Power Grid Control: A Graph-Based Soft-Label Imitation Learning Approach
Machine Learning (CS)
Helps power grids avoid blackouts using smart computer learning.