Federated Gaussian Mixture Models
By: Sophia Zhang Pettersson, Kuo-Yun Liang, Juan Carlos Andresen
Potential Business Impact:
Lets phones learn together without sharing secrets.
This paper introduces FedGenGMM, a novel one-shot federated learning approach for Gaussian Mixture Models (GMMs) tailored to unsupervised learning scenarios. In federated learning (FL), where multiple decentralized clients collaboratively train models without sharing raw data, the main challenges are statistical heterogeneity, high communication costs, and privacy concerns. FedGenGMM addresses these issues by aggregating local GMMs, trained independently on client devices, in a single communication round. The approach leverages the generative property of GMMs: the server samples a synthetic dataset from the uploaded local models and uses it to train a global model efficiently. Evaluations across image, tabular, and time-series datasets show that FedGenGMM consistently matches the performance of non-federated and iterative federated methods, even under significant data heterogeneity. FedGenGMM also substantially reduces communication overhead, remains robust in anomaly detection tasks, and accommodates differing local model complexities, making it particularly well suited to edge computing environments.
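To make the one-shot aggregation concrete, here is a minimal sketch in Python using scikit-learn's GaussianMixture. The client data, component counts, and sample sizes are illustrative assumptions for this sketch, not details taken from the paper's implementation.

```python
# Sketch of the one-shot FedGenGMM idea: clients fit local GMMs,
# upload only their parameters once, and the server fits a global
# GMM on synthetic samples drawn from the local models.
# All dataset shapes and hyperparameters here are assumed.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Assumed stand-in for heterogeneous client data: each client holds
# points drawn from a different region of the input space.
local_datasets = [
    rng.normal(loc=mu, scale=1.0, size=(500, 2))
    for mu in ([-4.0, -4.0], [0.0, 0.0], [4.0, 4.0])
]

# Client side: each client fits its own GMM independently; note that
# local model complexity (n_components) could differ per client.
local_models = [
    GaussianMixture(n_components=2, random_state=0).fit(X)
    for X in local_datasets
]

# Single communication round: only GMM parameters travel to the
# server; here we simply keep the fitted objects for brevity.

# Server side: exploit the generative property of GMMs to draw a
# synthetic dataset from every local model...
synthetic = np.vstack([gmm.sample(1000)[0] for gmm in local_models])

# ...then fit one global GMM on the pooled synthetic samples.
global_model = GaussianMixture(n_components=6, random_state=0).fit(synthetic)

print("Global component means:\n", global_model.means_)
```

In this sketch the only payload a client would send is its mixture parameters (weights, means, covariances), which is what keeps the communication cost to a single round regardless of local dataset size.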
Similar Papers
Federated Learning for Diffusion Models
Machine Learning (CS)
Makes AI learn better from scattered, different data.
Knowledge-Driven Federated Graph Learning on Model Heterogeneity
Machine Learning (CS)
Lets different computers learn together safely.
Rethinking Federated Graph Learning: A Data Condensation Perspective
Machine Learning (CS)
Shares knowledge from many computers safely.