Feature-Based Semantics-Aware Scheduling for Energy-Harvesting Federated Learning
By: Eunjeong Jeong, Giovanni Perin, Howard H. Yang, and more
Potential Business Impact:
Smartly trains AI on phones, saving energy.
Federated Learning (FL) on resource-constrained edge devices faces a critical challenge: the computational energy required for training Deep Neural Networks (DNNs) often dominates communication costs. However, most existing Energy-Harvesting FL (EHFL) strategies fail to account for this reality, wasting energy on redundant local computations. Efficient, proactive resource management therefore requires algorithms that predict the contribution of each local update before it is computed. We propose a lightweight client scheduling framework based on the Version Age of Information (VAoI), a semantics-aware metric that quantifies both the timeliness and the significance of updates. Crucially, we overcome VAoI's typically prohibitive computational cost, which stems from computing statistical distances over the entire parameter space, by introducing a feature-based proxy. This proxy estimates model redundancy from intermediate-layer activations extracted in a single forward pass, dramatically reducing computational complexity. Experiments under extreme non-IID data distributions and scarce energy availability demonstrate superior learning performance while reducing energy consumption compared to existing client-selection baselines. Our framework establishes semantics-aware scheduling as a practical and vital solution for EHFL in realistic scenarios where training costs dominate transmission costs.
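The core idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes redundancy is estimated as a cosine distance between mean intermediate-layer activations (one forward pass per client), and that the scheduler ranks energy-eligible clients by version age weighted by that redundancy score. All function and parameter names here are hypothetical.

```python
import numpy as np

def feature_redundancy(feat_current, feat_reference):
    """Proxy for update significance from intermediate-layer features.

    feat_current / feat_reference: (num_samples, feature_dim) activations
    from the client's current model and from its last transmitted version.
    Returns a cosine distance in [0, 2]; values near 0 mean the features
    barely changed, i.e. the local update is likely redundant.
    """
    mu_c = feat_current.mean(axis=0)
    mu_r = feat_reference.mean(axis=0)
    denom = np.linalg.norm(mu_c) * np.linalg.norm(mu_r) + 1e-12
    return 1.0 - float(mu_c @ mu_r) / denom

def schedule_clients(version_ages, redundancy_scores, energies, e_min, k):
    """Pick up to k clients with the highest VAoI-style score.

    A client scores high when its version is stale (large age) AND its
    features have drifted (large redundancy score). Clients whose
    harvested energy cannot cover one local update (e < e_min) are
    excluded, reflecting the energy-harvesting constraint.
    """
    scores = np.asarray(version_ages, dtype=float) * np.asarray(redundancy_scores, dtype=float)
    eligible = [i for i, e in enumerate(energies) if e >= e_min]
    ranked = sorted(eligible, key=lambda i: scores[i], reverse=True)
    return ranked[:k]
```

For example, a client with a stale version but nearly unchanged features may be skipped in favor of a fresher client whose features have drifted substantially, which is what distinguishes this semantics-aware policy from pure age-based or random selection.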
Similar Papers
Computation-aware Energy-harvesting Federated Learning: Cyclic Scheduling with Selective Participation
Machine Learning (CS)
Saves phone battery by smarter training.
Integrated user scheduling and beam steering in over-the-air federated learning for mobile IoT
Distributed, Parallel, and Cluster Computing
Helps phones learn without sharing private data.
Federated Learning within Global Energy Budget over Heterogeneous Edge Accelerators
Distributed, Parallel, and Cluster Computing
Trains AI smarter with less energy.