ADF-LoRA: Alternating Low-Rank Aggregation for Decentralized Federated Fine-Tuning
By: Xiaoyu Wang, Xiaotian Li, Zhixiang Zhou, and more
Potential Business Impact:
Makes collaborative machine learning work better without a central server.
This paper revisits alternating low-rank updates for federated fine-tuning and examines their behavior in decentralized federated learning (DFL). While alternating the LoRA matrices has been shown to stabilize aggregation in centralized FL, extending this mechanism to decentralized, peer-to-peer communication introduces new challenges due to phase-state mismatch and block-wise divergence across clients. We introduce ADF-LoRA, which synchronizes the update of only one low-rank matrix per round and mixes both matrices to maintain more consistent parameter states under decentralized propagation. This design preserves the cross-term suppression effect of alternating updates while improving stability in serverless topologies. We provide a convergence analysis under standard smoothness assumptions and evaluate ADF-LoRA on multiple GLUE tasks. Experiments show that ADF-LoRA achieves faster and smoother convergence and delivers the highest average accuracy across tasks, outperforming existing LoRA variants in decentralized FL by a consistent margin.
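To make the round structure described in the abstract concrete, here is a minimal NumPy sketch of the general idea, not the authors' implementation: the `Client` class, the `gossip_mix` helper, the ring mixing matrix, and the random placeholder "gradients" are all illustrative assumptions. It only shows the two-part mechanism: each round, every client trains exactly one LoRA factor (alternating A and B), then both factors are averaged with neighbors over a serverless, peer-to-peer topology.

```python
# Illustrative sketch of alternating low-rank updates with decentralized mixing.
# Not the paper's code: names, mixing weights, and the fake gradients are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 16, 4          # frozen weight W is d x k; LoRA factors B (d x r), A (r x k)
n_clients = 4

class Client:
    def __init__(self):
        self.B = np.zeros((d, r))             # common LoRA init: B = 0
        self.A = rng.normal(0, 0.02, (r, k))  # A initialized randomly

    def local_step(self, phase, lr=1e-2):
        # Train only one factor this round (alternating phase); the other stays fixed.
        grad_like = rng.normal(0, 1, self.B.shape if phase == "B" else self.A.shape)
        if phase == "B":                      # placeholder for the true local gradient
            self.B -= lr * grad_like
        else:
            self.A -= lr * grad_like

def gossip_mix(clients, W_mix):
    # Mix BOTH factors with neighbors, even though only one was trained this round,
    # to keep parameter states consistent under decentralized propagation.
    Bs = np.stack([c.B for c in clients])
    As = np.stack([c.A for c in clients])
    for i, c in enumerate(clients):
        c.B = np.tensordot(W_mix[i], Bs, axes=1)
        c.A = np.tensordot(W_mix[i], As, axes=1)

clients = [Client() for _ in range(n_clients)]

# Symmetric, doubly stochastic mixing matrix for a ring topology (illustrative choice).
W_mix = np.zeros((n_clients, n_clients))
for i in range(n_clients):
    W_mix[i, i] = 0.5
    W_mix[i, (i - 1) % n_clients] = 0.25
    W_mix[i, (i + 1) % n_clients] = 0.25

for t in range(10):
    phase = "B" if t % 2 == 0 else "A"   # alternate which LoRA factor is updated
    for c in clients:
        c.local_step(phase)
    gossip_mix(clients, W_mix)           # peer-to-peer averaging; no central server
```

Because only one factor changes between mixing steps, the cross terms that arise from averaging products of two independently drifting matrices are suppressed, which is the stabilizing effect the paper carries over from centralized alternating LoRA.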
Similar Papers
ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation
Machine Learning (CS)
Fixes AI learning when data is different.
FedLoRA-Optimizer: Federated LoRA Fine-Tuning with Global and Local Optimization in Heterogeneous Data Scenarios
Machine Learning (CS)
Improves AI learning from many different users.
Communication-Efficient Wireless Federated Fine-Tuning for Large-Scale AI Models
Machine Learning (CS)
Trains big computer brains with less data sent.