ADF-LoRA: Alternating Low-Rank Aggregation for Decentralized Federated Fine-Tuning

Published: November 23, 2025 | arXiv ID: 2511.18291v1

By: Xiaoyu Wang, Xiaotian Li, Zhixiang Zhou, and more

Potential Business Impact:

Lets many devices fine-tune a shared machine-learning model together without relying on a central server.

Business Areas:
A/B Testing, Data and Analytics

This paper revisits alternating low-rank updates for federated fine-tuning and examines their behavior in decentralized federated learning (DFL). While alternating the LoRA matrices has been shown to stabilize aggregation in centralized FL, extending this mechanism to decentralized, peer-to-peer communication introduces new challenges due to phase-state mismatch and block-wise divergence across clients. We introduce ADF-LoRA, which synchronizes the update of only one low-rank matrix per round and mixes both matrices to maintain more consistent parameter states under decentralized propagation. This design preserves the cross-term suppression effect of alternating updates while improving stability in serverless topologies. We provide a convergence analysis under standard smoothness assumptions and evaluate ADF-LoRA on multiple GLUE tasks. Experiments show that ADF-LoRA achieves faster and smoother convergence and delivers the highest average accuracy across tasks, outperforming existing LoRA variants in decentralized FL by a consistent margin.
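
To make the alternating-update idea concrete, below is a minimal Python sketch of one decentralized round following the mechanism the abstract describes: each client trains only one LoRA factor per round (alternating between A and B), then both factors are mixed with neighbors over a peer-to-peer topology. The function names (`adf_lora_round`, `gossip_mix`), the simple even/odd phase schedule, and the doubly stochastic mixing matrix `W` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def local_update(A, B, grad_fn, lr, phase):
    """Train only one LoRA factor this round (alternating phases).

    phase == "A": update A and freeze B; phase == "B": update B and freeze A.
    grad_fn(A, B) returns (grad_A, grad_B) for the client's local loss.
    """
    gA, gB = grad_fn(A, B)
    if phase == "A":
        A = A - lr * gA
    else:
        B = B - lr * gB
    return A, B

def gossip_mix(params, W):
    """Mix parameters across peers with a doubly stochastic mixing matrix W.

    params: list of (A_i, B_i), one pair per client. Both factors are mixed
    each round, matching the 'mix both matrices' step in the abstract.
    """
    n = len(params)
    mixed = []
    for i in range(n):
        A_new = sum(W[i, j] * params[j][0] for j in range(n))
        B_new = sum(W[i, j] * params[j][1] for j in range(n))
        mixed.append((A_new, B_new))
    return mixed

def adf_lora_round(params, grad_fns, W, lr, round_idx):
    """One decentralized round: alternate the trained factor, then mix both."""
    phase = "A" if round_idx % 2 == 0 else "B"
    updated = [local_update(A, B, g, lr, phase)
               for (A, B), g in zip(params, grad_fns)]
    return gossip_mix(updated, W)

# Toy usage: 3 clients, rank-2 LoRA factors for a 4x4 weight, ring topology.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = [(rng.normal(size=(4, 2)), np.zeros((2, 4))) for _ in range(3)]
    grad_fns = [lambda A, B: (A, B) for _ in range(3)]  # placeholder gradients
    W = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])
    for t in range(4):
        params = adf_lora_round(params, grad_fns, W, lr=0.1, round_idx=t)
```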

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)