Federated Learning within Global Energy Budget over Heterogeneous Edge Accelerators

Published: June 12, 2025 | arXiv ID: 2506.10413v1

By: Roopkatha Banerjee, Tejus Chandrashekar, Ananth Eswar, and more

Potential Business Impact:

Trains shared AI models across edge devices while staying within a fixed energy budget.

Business Areas:
Energy Efficiency, Energy, Sustainability

Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, jointly optimizing energy efficiency and model accuracy remains challenging given device and data heterogeneity, and sustainable AI through a global energy budget for FL has not been explored. We propose a novel client-selection optimization problem for FL that maximizes model accuracy within an overall energy limit while also reducing training time. We solve it with a bi-level integer linear programming (ILP) formulation that leverages approximate Shapley values and energy-time prediction models. Our FedJoule framework achieves superior training accuracy compared to state-of-the-art (SOTA) and simple baselines across diverse energy budgets, non-IID data distributions, and realistic experiment configurations, performing 15% better on accuracy and 48% better on training time. These results highlight the effectiveness of our method in achieving a viable trade-off between energy usage and performance in FL environments.
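The per-round client-selection step described in the abstract can be read as a knapsack-style ILP: choose the subset of clients whose summed contribution (approximated by Shapley values) is largest, subject to the round's predicted energy fitting its budget slice. The sketch below, using the PuLP solver, illustrates only that inner selection level; the client names, Shapley values, energy/time predictions, and budget slice are hypothetical placeholders, and the paper's actual FedJoule formulation is bi-level and handles time more carefully.

```python
# Minimal sketch of one-round client selection under an energy budget slice.
# All names and numbers below are illustrative assumptions, not values
# taken from the paper.
import pulp

# Hypothetical per-client estimates for one FL round: approximate Shapley
# value (contribution to accuracy), predicted energy (Wh), and training time (s).
clients = {
    "jetson_nano": {"shapley": 0.12, "energy_wh": 4.0, "time_s": 210},
    "jetson_orin": {"shapley": 0.31, "energy_wh": 9.5, "time_s": 60},
    "rpi_cluster": {"shapley": 0.07, "energy_wh": 2.2, "time_s": 340},
    "coral_tpu":   {"shapley": 0.18, "energy_wh": 3.1, "time_s": 95},
    "xavier_nx":   {"shapley": 0.25, "energy_wh": 7.8, "time_s": 80},
}
ROUND_ENERGY_BUDGET_WH = 15.0  # slice of the global budget assigned to this round

prob = pulp.LpProblem("fl_client_selection", pulp.LpMaximize)
pick = {c: pulp.LpVariable(f"pick_{c}", cat="Binary") for c in clients}

# Objective: maximize the summed contribution of the selected clients.
prob += pulp.lpSum(clients[c]["shapley"] * pick[c] for c in clients)

# Constraint: predicted energy of the selected clients must fit the budget slice.
prob += (
    pulp.lpSum(clients[c]["energy_wh"] * pick[c] for c in clients)
    <= ROUND_ENERGY_BUDGET_WH
)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
selected = [c for c in clients if pick[c].value() == 1]
# In synchronous FL the round lasts as long as its slowest selected client.
round_time = max(clients[c]["time_s"] for c in selected)
print("selected:", selected, "| round time (s):", round_time)
```

In the bi-level view, an outer problem would decide how much of the global energy budget each round receives (and could penalize the slowest-client round time), while this inner ILP spends each round's slice; the Shapley inputs would themselves come from an approximation scheme rather than being fixed constants as above.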

Country of Origin
🇮🇳 India

Page Count
18 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing