DFPL: Decentralized Federated Prototype Learning Across Heterogeneous Data Distributions
By: Hongliang Zhang, Fenghua Xu, Zhongyuan Yu, and more
Potential Business Impact:
Helps computers learn together without sharing private data.
Federated learning is a distributed machine learning paradigm in which clients collaboratively train a model through centralized aggregation of their updates. However, this reliance on a central server makes standard federated learning vulnerable to server failures. While existing solutions utilize blockchain technology to implement Decentralized Federated Learning (DFL), the statistical heterogeneity of data distributions among clients severely degrades the performance of DFL. Motivated by this issue, this paper proposes a decentralized federated prototype learning framework, named DFPL, which significantly improves the performance of DFL across heterogeneous data distributions. Specifically, DFPL introduces prototype learning into DFL to mitigate the impact of statistical heterogeneity and to reduce the number of parameters exchanged between clients. Additionally, blockchain is embedded into the framework, enabling the training and mining processes to be implemented locally on each client. From a theoretical perspective, the paper analyzes the convergence of DFPL by modeling the computational resources required during both training and mining. The experimental results highlight the superiority of DFPL in model performance and communication efficiency across four benchmark datasets with heterogeneous data distributions.
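The core mechanism the abstract describes, exchanging class prototypes instead of full model parameters, can be illustrated with a short sketch. Below is a minimal illustration assuming prototypes are per-class mean embeddings, as in standard federated prototype learning; the function names (`compute_prototypes`, `aggregate_prototypes`) and the simple peer-averaging rule are assumptions made for illustration, not DFPL's exact protocol, which additionally involves blockchain-based mining and consensus.

```python
# Sketch of decentralized prototype exchange. Assumes prototypes are
# per-class mean embeddings (standard federated prototype learning);
# names and the averaging rule are illustrative, not DFPL's exact API.
from collections import defaultdict
import torch

@torch.no_grad()
def compute_prototypes(encoder, loader, device="cpu"):
    """Local step: average each class's embeddings into one prototype."""
    sums, counts = defaultdict(lambda: 0.0), defaultdict(int)
    encoder.eval()  # assumes encoder is a torch.nn.Module
    for x, y in loader:
        z = encoder(x.to(device))  # embeddings, shape (batch, d)
        for emb, label in zip(z, y):
            sums[int(label)] = sums[int(label)] + emb
            counts[int(label)] += 1
    return {c: sums[c] / counts[c] for c in sums}

def aggregate_prototypes(peer_protos):
    """Merge prototype dicts received from peers, with no central server:
    a class seen by several clients gets its prototypes averaged."""
    merged, counts = {}, defaultdict(int)
    for protos in peer_protos:
        for c, p in protos.items():
            merged[c] = merged.get(c, 0.0) + p
            counts[c] += 1
    return {c: merged[c] / counts[c] for c in merged}
```

Under such a scheme, each client broadcasts only one d-dimensional vector per class rather than full model weights, which is consistent with the communication savings the abstract claims.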
Similar Papers
From Centralized to Decentralized Federated Learning: Theoretical Insights, Privacy Preservation, and Robustness Challenges
Machine Learning (CS)
Helps computers learn together without sharing secrets.
UnifyFL: Enabling Decentralized Cross-Silo Federated Learning
Distributed, Parallel, and Cluster Computing
Lets groups train AI together without sharing private data.
Performance Analysis of Decentralized Federated Learning Deployments
Machine Learning (CS)
Helps phones learn together without a boss.