DAG-AFL: Directed Acyclic Graph-based Asynchronous Federated Learning
By: Shuaipeng Zhang, Lanju Kong, Yixin Zhang and more
Potential Business Impact:
Speeds up collaborative model training across many devices and makes the resulting models more accurate.
Due to the distributed nature of federated learning (FL), the vulnerability of the global model and the need for coordination among many client devices pose significant challenges. As a promising decentralized, scalable and secure solution, blockchain-based FL methods have attracted widespread attention in recent years. However, traditional blockchain consensus mechanisms such as Proof of Work (PoW) incur substantial resource consumption and compromise the efficiency of FL, particularly when participating devices are wireless and resource-limited. To address asynchronous client participation and data heterogeneity in FL while limiting the additional resource overhead introduced by blockchain, we propose the Directed Acyclic Graph-based Asynchronous Federated Learning (DAG-AFL) framework. We develop a tip selection algorithm that considers temporal freshness, node reachability and model accuracy, together with a DAG-based trusted verification strategy. Extensive experiments on three benchmark datasets against eight state-of-the-art approaches demonstrate that DAG-AFL significantly improves training efficiency and model accuracy, by 22.7% and 6.5% on average, respectively.
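The abstract only names the three factors that the tip selection algorithm weighs (temporal freshness, node reachability, model accuracy). Below is a minimal, hypothetical Python sketch of how a scoring-based tip selection along those lines might look; the `TipNode` class, the function names, and the weights `alpha`/`beta`/`gamma` are illustrative assumptions, not the authors' actual formulation.

```python
import time
from dataclasses import dataclass


@dataclass
class TipNode:
    """A tip (unconfirmed model update) in the DAG ledger. Illustrative only."""
    node_id: str
    timestamp: float        # when the local update was published
    reachable_count: int    # number of DAG nodes reachable from this tip
    accuracy: float         # validation accuracy of the attached model (0..1)


def score_tip(tip: TipNode, now: float, total_nodes: int,
              alpha: float = 0.4, beta: float = 0.3, gamma: float = 0.3) -> float:
    """Combine temporal freshness, reachability and accuracy into one score.

    The linear combination and the alpha/beta/gamma weights are assumptions;
    the paper only states that these three factors are considered.
    """
    freshness = 1.0 / (1.0 + (now - tip.timestamp))            # newer tips score higher
    reachability = tip.reachable_count / max(total_nodes, 1)   # better-connected tips score higher
    return alpha * freshness + beta * reachability + gamma * tip.accuracy


def select_tips(tips: list[TipNode], k: int = 2) -> list[TipNode]:
    """Pick the k best-scoring tips for a new local update to reference."""
    now = time.time()
    total = len(tips)
    return sorted(tips, key=lambda t: score_tip(t, now, total), reverse=True)[:k]


# Example usage: a client chooses two parent tips before publishing its update.
tips = [
    TipNode("a", time.time() - 5, reachable_count=3, accuracy=0.81),
    TipNode("b", time.time() - 60, reachable_count=7, accuracy=0.86),
    TipNode("c", time.time() - 2, reachable_count=1, accuracy=0.74),
]
parents = select_tips(tips, k=2)
```

In a DAG ledger, referencing (approving) well-chosen tips is what replaces heavyweight PoW-style consensus, which is why a cheap scoring rule like the one sketched here can reduce resource overhead for wireless, resource-limited clients.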
Similar Papers
Fault-Tolerant Decentralized Distributed Asynchronous Federated Learning with Adaptive Termination Detection
Distributed, Parallel, and Cluster Computing
Lets computers learn together without sharing private data.
Evaluation Framework for Centralized and Decentralized Aggregation Algorithm in Federated Systems
Distributed, Parallel, and Cluster Computing
Trains computers together without sharing private info.
Blockchain-Enabled Federated Learning
Distributed, Parallel, and Cluster Computing
Lets computers learn together safely, without sharing secrets.