Scaling Decentralized Learning with FLock
By: Zehua Cheng, Rui Sun, Jiahao Sun, and more
Potential Business Impact:
Makes AI learn together safely, without one boss.
Fine-tuning large language models (LLMs) is hampered by the shortcomings of centralized control on one side and by the massive computation and communication overhead of decentralized schemes on the other. While standard federated learning (FL) supports data privacy, its reliance on a central server creates a single point of attack and a vulnerability to poisoning. Scaling this direction to 70B-parameter models in heterogeneous, trustless environments has remained a major unsolved bottleneck. This paper introduces FLock, a decentralized framework for secure and efficient collaborative LLM fine-tuning. By integrating a blockchain-based trust layer with economic incentives, FLock replaces the central aggregator with a secure, auditable protocol for cooperation among untrusted parties. We present the first empirical validation of fine-tuning a 70B LLM in a secure, multi-domain, decentralized setting. Our experiments show that the FLock framework defends against backdoor poisoning attacks that compromise standard FL optimizers and fosters synergistic knowledge transfer. The resulting models show a >68% reduction in adversarial attack success rates. The global model also demonstrates superior cross-domain generalization, outperforming models trained in isolation on their own specialized data.
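The abstract does not spell out FLock's aggregation rule, so the sketch below is only a rough illustration of how decentralized peers might combine model updates robustly without a central server. It uses a coordinate-wise trimmed mean, a standard poisoning defense; the function name `trimmed_mean_aggregate`, its parameters, and the toy data are all illustrative assumptions, not the paper's actual protocol or API.

```python
# Illustrative sketch only: robust, server-free aggregation of peer updates.
# The coordinate-wise trimmed mean is a standard poisoning defense; FLock's
# actual blockchain-backed protocol and incentive mechanism are not shown.
import numpy as np

def trimmed_mean_aggregate(updates: list[np.ndarray], trim_ratio: float = 0.2) -> np.ndarray:
    """Aggregate peer model updates, discarding the most extreme values.

    updates: one flattened parameter-update vector per participant.
    trim_ratio: fraction of values trimmed from each tail, per coordinate.
    """
    stacked = np.stack(updates)                # shape: (n_peers, n_params)
    n_peers = stacked.shape[0]
    k = int(n_peers * trim_ratio)              # values trimmed per tail
    sorted_updates = np.sort(stacked, axis=0)  # sort each coordinate independently
    # Drop the k smallest and k largest values per coordinate, then average.
    return sorted_updates[k:n_peers - k].mean(axis=0)

# Toy demo: nine honest peers plus one peer submitting a poisoned update.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.01, size=8) for _ in range(9)]
poisoned = np.full(8, 5.0)                     # adversarial, large-magnitude update
agg = trimmed_mean_aggregate(honest + [poisoned], trim_ratio=0.2)
print(agg)  # stays near zero; a naive mean would be pulled toward 0.5
```

The design intuition is that trimming tolerates a bounded number of outlier contributions per coordinate, so a minority of poisoned updates cannot drag the aggregate; FLock's auditable, incentive-backed protocol presumably addresses the same threat with additional accountability guarantees.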
Similar Papers
Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation
Machine Learning (CS)
Steals private info from shared AI training.
A Survey on Federated Fine-tuning of Large Language Models
Machine Learning (CS)
Teaches computers to learn together, keeping secrets safe.
Flow of Knowledge: Federated Fine-Tuning of LLMs in Healthcare under Non-IID Conditions
Computational Engineering, Finance, and Science
Doctors share AI knowledge without sharing patient secrets.