Optimizing Federated Learning for Scalable Power-demand Forecasting in Microgrids
By: Roopkatha Banerjee, Sampath Koti, Gyanendra Singh, and more
Potential Business Impact:
Predicts power use without sharing private data.
Real-time monitoring of power consumption in cities and microgrids through the Internet of Things (IoT) can help forecast future demand and optimize grid operations. But moving all consumer-level usage data to the cloud for predictions and analysis at fine time scales can expose activity patterns. Federated Learning (FL) is a privacy-sensitive collaborative DNN training approach that retains data on edge devices, trains the models on private data locally, and aggregates the local models in the cloud. But key challenges exist: (i) clients can have non-independent and identically distributed (non-IID) data, and (ii) the learning should be computationally cheap while scaling to thousands of (unseen) clients. In this paper, we develop and evaluate several optimizations to FL training across edge and cloud for time-series demand forecasting in microgrids and city-scale utilities using DNNs, achieving high prediction accuracy while minimizing the training cost. We also show that using an exponentially weighted loss during training further improves the prediction accuracy of the final model. Finally, we evaluate these strategies by validating over thousands of clients for three states in the US from the OpenEIA corpus, performing FL both in a pseudo-distributed setting and on a Pi edge cluster. The results highlight the benefits of the proposed methods over baselines like ARIMA and DNNs trained for individual consumers, which are not scalable.
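To make the two key ingredients concrete, below is a minimal Python sketch, not the authors' implementation: an exponentially weighted MSE that emphasizes recent time steps in the forecasting loss, and a FedAvg-style aggregation step that averages client model parameters in the cloud weighted by local data size. The function names, the `decay` hyperparameter, and the toy values are illustrative assumptions.

    import numpy as np

    # Sketch only: exponentially weighted loss + FedAvg-style aggregation,
    # under assumed names and hyperparameters (not the paper's exact method).

    def exp_weighted_mse(y_true, y_pred, decay=0.9):
        """MSE where a sample's weight decays exponentially with its age.

        The newest time step gets weight 1, the previous one `decay`,
        then `decay**2`, and so on. `decay` is an assumed hyperparameter.
        """
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        ages = np.arange(len(y_true))[::-1]       # 0 for the most recent sample
        w = decay ** ages
        return float(np.sum(w * (y_true - y_pred) ** 2) / np.sum(w))

    def fedavg(client_weights, client_sizes):
        """Average per-client parameter vectors, weighted by local sample count."""
        stacked = np.stack(client_weights)        # shape: (num_clients, num_params)
        return np.average(stacked, axis=0, weights=np.asarray(client_sizes, float))

    if __name__ == "__main__":
        # Toy example: two clients' flattened model parameters and dataset sizes.
        w_global = fedavg([np.array([0.2, 1.0]), np.array([0.4, 0.8])], [300, 100])
        print("aggregated weights:", w_global)
        print("exp-weighted loss:", exp_weighted_mse([1.0, 2.0, 3.0], [1.1, 1.9, 2.7]))

The same weighting idea carries over directly to a DNN training loop (e.g., as a custom loss in PyTorch or Keras), with the cloud repeating the aggregation step after each round of local training.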
Similar Papers
Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence
Machine Learning (CS)
Trains computers together without sharing private info.
Federated Learning Framework for Scalable AI in Heterogeneous HPC and Cloud Environments
Distributed, Parallel, and Cluster Computing
Trains AI on many computers without sharing private data.