Federated Learning with Feedback Alignment
By: Incheol Baek, Hyungbin Kim, Minseo Kim, and more
Potential Business Impact:
Helps computers learn together without sharing private data.
Federated Learning (FL) enables collaborative training across multiple clients while preserving data privacy, yet it struggles with data heterogeneity, where clients' data are not distributed independently and identically (non-IID). This heterogeneity causes local drift, hindering global model convergence. To address this, we introduce Federated Learning with Feedback Alignment (FLFA), a novel framework that integrates feedback alignment into FL. FLFA uses the global model's weights as a shared feedback matrix during the backward pass of local training, efficiently aligning local updates with the global model. This approach mitigates local drift with minimal additional computational cost and no extra communication overhead. Our theoretical analysis supports FLFA's design by showing how it alleviates local drift and by establishing robust convergence for both local and global models. Empirical evaluations, including accuracy comparisons and measurements of local drift, further show that FLFA can enhance other FL methods, demonstrating its effectiveness.
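To make the core mechanism concrete, here is a minimal sketch (not the authors' code) of the idea the abstract describes: during a client's local backward pass, the error is propagated through the global model's weights, used as a shared, fixed feedback matrix, rather than through the client's own drifting weights. The two-layer MLP, the `local_step` helper, and all parameter names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def local_step(W1, W2, W2_global, x, y, lr=0.1):
    """One local SGD step on a toy two-layer MLP with feedback alignment.

    W1, W2    : the client's local weights
    W2_global : the server's layer-2 weights, used only as the feedback
                matrix in the backward pass (never updated here)
    """
    # Forward pass uses the local weights as usual.
    h = relu(W1 @ x)          # hidden activations
    y_hat = W2 @ h            # linear output layer
    e = y_hat - y             # output error (squared-error gradient)

    # Backward pass: standard gradient for the output layer ...
    dW2 = np.outer(e, h)
    # ... but the error is sent back through the GLOBAL weights,
    # aligning the hidden-layer update with the global model.
    delta_h = (W2_global.T @ e) * (h > 0)   # ReLU derivative mask
    dW1 = np.outer(delta_h, x)

    return W1 - lr * dW1, W2 - lr * dW2

# Toy usage: one client step against a random target.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(2, 8))
W2_global = rng.normal(size=(2, 8))   # stands in for the server's weights
x, y = rng.normal(size=4), rng.normal(size=2)
W1, W2 = local_step(W1, W2, W2_global, x, y)
```

Because the feedback matrix is the global model the client already downloaded, this sketch adds no communication beyond standard FL and only a trivial change to the backward pass, consistent with the overhead claims above.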
Similar Papers
Optimization Methods and Software for Federated Learning
Machine Learning (CS)
Helps many phones learn together safely.
Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence
Machine Learning (CS)
Trains computers together without sharing private info.
FedPPA: Progressive Parameter Alignment for Personalized Federated Learning
Machine Learning (CS)
Helps computers learn from everyone's private info.