Conditioning on Local Statistics for Scalable Heterogeneous Federated Learning
By: Rickard Brännvall
Potential Business Impact:
Helps AI learn from private data without sharing it.
Federated learning is a distributed machine learning approach in which multiple clients collaboratively train a model without sharing their local data, which helps preserve privacy. A central challenge in federated learning is managing heterogeneous data distributions across clients: the global model must generalize well across diverse local datasets, which can hinder convergence and performance. We propose to use local characteristic statistics, by which we mean statistical properties that each client computes independently from its own local training dataset. These statistics, such as means, covariances, and higher moments, capture the characteristics of the local data distribution and are never shared with other clients or with a central node. During training, these local statistics help the model learn to condition on the local data distribution, and during inference, they guide each client's predictions. Our experiments show that this approach handles heterogeneous data across the federation efficiently, scales more favorably than approaches that directly try to identify peer nodes with similar distribution characteristics, and maintains privacy, since no additional information needs to be communicated.
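The abstract describes the mechanism only at a high level, so the following is a minimal sketch of one plausible reading, not the paper's implementation. It assumes each client computes a statistics vector from its private data (here per-feature means, variances, and third central moments, standing in for the means, covariances, and higher moments mentioned above) and that the shared model conditions on that vector by simple concatenation; the names local_statistics and StatsConditionedNet are hypothetical, and concatenation is only one possible conditioning mechanism (FiLM-style modulation would be another).

```python
import torch
import torch.nn as nn

def local_statistics(x: torch.Tensor) -> torch.Tensor:
    """Per-feature mean, variance, and third central moment of a
    client's local training data (shape: [n_samples, n_features])."""
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    third = ((x - mean) ** 3).mean(dim=0)
    return torch.cat([mean, var, third])  # shape: [3 * n_features]

class StatsConditionedNet(nn.Module):
    """Hypothetical model that conditions on a client's local
    statistics by concatenating them to every input example."""
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 3 * n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor, stats: torch.Tensor) -> torch.Tensor:
        # Broadcast the fixed client statistics across the batch.
        s = stats.expand(x.size(0), -1)
        return self.net(torch.cat([x, s], dim=1))

# Usage on one client (synthetic data for illustration).
x_local = torch.randn(256, 10)            # client's private features
y_local = torch.randint(0, 2, (256,))     # client's private labels
stats = local_statistics(x_local)         # computed once, never transmitted

model = StatsConditionedNet(n_features=10, n_classes=2)
logits = model(x_local, stats)
loss = nn.functional.cross_entropy(logits, y_local)
loss.backward()  # gradients feed the usual federated averaging round
```

In a federated round, only the model weights would be aggregated (for example via FedAvg); the statistics vector never leaves the client, which is what makes the privacy claim in the abstract plausible in this sketch.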
Similar Papers
Federated Learning on Stochastic Neural Networks
Machine Learning (CS)
Cleans up messy data for smarter AI.
Client Selection in Federated Learning with Data Heterogeneity and Network Latencies
Machine Learning (CS)
Makes smart computers learn faster from different data.
An Adaptive Clustering Scheme for Client Selections in Communication-Efficient Federated Learning
Machine Learning (CS)
Smartly groups users to train computers faster.