Federated Learning within Global Energy Budget over Heterogeneous Edge Accelerators
By: Roopkatha Banerjee, Tejus Chandrashekar, Ananth Eswar, and more
Potential Business Impact:
Trains AI smarter with less energy.
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, optimizing both energy efficiency and model accuracy remains a challenge, given device and data heterogeneity. Further, sustainable AI through a global energy budget for FL has not been explored. We propose a novel optimization problem for client selection in FL that maximizes model accuracy within an overall energy limit while reducing training time. We solve this with a bi-level Integer Linear Programming (ILP) formulation that leverages approximate Shapley values and energy-time prediction models for efficiency. Our FedJoule framework achieves superior training accuracies compared to SOTA and simple baselines across diverse energy budgets, non-IID distributions, and realistic experiment configurations, performing 15% and 48% better on accuracy and time, respectively. The results highlight the effectiveness of our method in achieving a viable trade-off between energy usage and performance in FL environments.
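The core selection step described in the abstract can be viewed as a budgeted subset-selection problem: each client carries an estimated per-round contribution (an approximate Shapley value) and a predicted energy cost, and the server picks the subset that maximizes total contribution without exceeding the energy budget. A minimal brute-force sketch of that inner selection, with all client names and numbers hypothetical (the paper's actual bi-level ILP and prediction models are considerably more involved):

```python
from itertools import combinations

def select_clients(contrib, energy, budget):
    """Pick the client subset whose total (approximate) Shapley
    contribution is largest while the summed predicted energy
    stays within the global budget. Exhaustive search: fine for
    illustration, not for real client populations."""
    best_set, best_val = (), 0.0
    n = len(contrib)
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            e = sum(energy[i] for i in subset)
            if e <= budget:
                v = sum(contrib[i] for i in subset)
                if v > best_val:
                    best_val, best_set = v, subset
    return list(best_set), best_val

# Hypothetical per-round estimates for five edge clients
contrib = [0.30, 0.25, 0.20, 0.15, 0.10]   # approx. Shapley values
energy  = [5.0, 4.0, 3.0, 2.0, 1.0]        # predicted Joules per round
clients, value = select_clients(contrib, energy, budget=7.0)
```

In practice, an ILP solver replaces the exhaustive search, and the contribution and energy estimates come from the Shapley approximation and energy-time prediction models rather than fixed constants.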
Similar Papers
FairEnergy: Contribution-Based Fairness meets Energy Efficiency in Federated Learning
Machine Learning (CS)
Saves phone battery while learning together.
Optimizing Federated Learning for Scalable Power-demand Forecasting in Microgrids
Distributed, Parallel, and Cluster Computing
Predicts power use without sharing private data.
Resource Utilization Optimized Federated Learning
Distributed, Parallel, and Cluster Computing
Makes smart computer learning faster and better.