A General Approach of Automated Environment Design for Learning the Optimal Power Flow
By: Thomas Wolgast, Astrid Nieße
Potential Business Impact:
Teaches computers to solve power grid problems better by automatically tuning their training setup.
Reinforcement learning (RL) algorithms are increasingly used to solve the optimal power flow (OPF) problem. Yet, the question of how to design RL environments to maximize training performance remains unanswered, both for the OPF and the general case. We propose a general approach for automated RL environment design by utilizing multi-objective optimization. For that, we use the hyperparameter optimization (HPO) framework, which allows the reuse of existing HPO algorithms and methods. On five OPF benchmark problems, we demonstrate that our automated design approach consistently outperforms a manually created baseline environment design. Further, we use statistical analyses to determine which environment design decisions are especially important for performance, resulting in multiple novel insights on how RL-OPF environments should be designed. Finally, we discuss the risk of overfitting the environment to the utilized RL algorithm. To the best of our knowledge, this is the first general approach for automated RL environment design.
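The core idea of the abstract — treating each RL environment design decision (observations, reward shaping, episode length, etc.) as a hyperparameter and searching over the design space with existing HPO-style tooling — can be sketched as follows. This is a minimal illustration, not the authors' method: the names (`DESIGN_SPACE`, `train_and_evaluate`, `automated_design`) are hypothetical, the scoring function is a toy stand-in for actually training an RL agent, and plain random search stands in for the multi-objective HPO algorithms the paper reuses.

```python
import random

# Hypothetical design space: each environment design decision is
# exposed as a hyperparameter, mirroring the paper's idea of reusing
# HPO frameworks for environment design.
DESIGN_SPACE = {
    "reward_weight": [0.1, 0.5, 1.0],
    "obs_includes_voltages": [True, False],
    "episode_length": [1, 10, 50],
}

def sample_design(rng):
    """Draw one candidate environment design from the space."""
    return {name: rng.choice(options) for name, options in DESIGN_SPACE.items()}

def train_and_evaluate(design):
    """Toy stand-in for training an RL agent on the designed
    environment and returning its evaluation performance."""
    score = design["reward_weight"]
    score += 0.5 if design["obs_includes_voltages"] else 0.0
    score -= 0.01 * design["episode_length"]
    return score

def automated_design(n_trials=20, seed=0):
    """Random search over environment designs; a real setup would
    plug in an HPO algorithm (e.g. Bayesian or multi-objective)."""
    rng = random.Random(seed)
    best_design, best_score = None, float("-inf")
    for _ in range(n_trials):
        design = sample_design(rng)
        score = train_and_evaluate(design)
        if score > best_score:
            best_design, best_score = design, score
    return best_design, best_score
```

In practice, `train_and_evaluate` would be the expensive inner loop (full RL training on an OPF environment), and the outer search would be handled by an off-the-shelf HPO library rather than this hand-rolled loop.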
Similar Papers
Neural Network Optimal Power Flow via Energy Gradient Flow and Unified Dynamics
Machine Learning (CS)
Runs power grids more cheaply and quickly using smart math.
Differentiable Optimization for Deep Learning-Enhanced DC Approximation of AC Optimal Power Flow
Optimization and Control
Makes power grids smarter and more efficient.
Optimizing Power Grid Topologies with Reinforcement Learning: A Survey of Methods and Challenges
Systems and Control
Helps power grids use renewable energy better.