ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation
By: Junchao Zhou, Junkang Liu, Fanhua Shang
Potential Business Impact:
Helps AI models learn well even when each client's data is different.
Federated Learning with Low-Rank Adaptation (LoRA) faces three critical challenges under client heterogeneity: (1) Initialization-Induced Instability, because random initialization misaligns client subspaces; (2) Rank Incompatibility and Aggregation Error, because averaging LoRA parameters of different ranks biases the global model; and (3) Client Drift exacerbated by non-IID data, which impairs generalization. To address these challenges, we propose ILoRA, a unified framework that integrates three core innovations: a QR-based orthonormal initialization that ensures all clients start in a coherent subspace; a Concatenated QR Aggregation mechanism that fuses heterogeneous-rank updates via concatenation and decomposition, preserving information while maintaining dimension alignment; and an AdamW optimizer with rank-aware control variates that corrects local updates and mitigates client drift. Backed by theoretical convergence guarantees, ILoRA consistently achieves higher accuracy and more stable convergence than existing federated LoRA methods in extensive experiments on vision and NLP benchmarks.
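The abstract describes the mechanisms only at a high level, so the following is a minimal sketch of the first two ideas (orthonormal initialization and concatenation-based aggregation), assuming PyTorch and standard LoRA factors with an update B @ A, where B is d x r and A is r x k. The function names, the shared-seed initialization, and the SVD-based rank-reduction step are illustrative assumptions, not the paper's exact procedure; the rank-aware control-variate optimizer is not sketched here.

```python
import torch

def qr_orthonormal_init(d: int, k: int, r: int, seed: int = 0):
    """Hypothetical sketch of a QR-based orthonormal LoRA initialization.

    The server draws one random matrix, orthonormalizes it via QR, and ships
    the result to all clients so they start in a coherent, shared subspace.
    Following standard LoRA, B is zero-initialized so the initial update is zero.
    """
    g = torch.Generator().manual_seed(seed)
    G = torch.randn(k, r, generator=g)                 # k x r, assumes k >= r
    Q, _ = torch.linalg.qr(G, mode="reduced")          # orthonormal columns
    A0 = Q.T                                           # r x k, orthonormal rows
    B0 = torch.zeros(d, r)
    return B0, A0

def concat_qr_aggregate(B_list, A_list, weights, r_global: int):
    """Hypothetical sketch of concatenation-based aggregation of
    heterogeneous-rank LoRA factors.

    Client i holds B_i (d x r_i) and A_i (r_i x k); its update is B_i @ A_i.
    The weighted sum of all updates factors exactly as
    [sqrt(w_i) B_i]_cat @ [sqrt(w_i) A_i]_cat, so concatenation preserves
    information even when the ranks r_i differ across clients.
    """
    B_cat = torch.cat([(w ** 0.5) * B for w, B in zip(weights, B_list)], dim=1)
    A_cat = torch.cat([(w ** 0.5) * A for w, A in zip(weights, A_list)], dim=0)

    # Orthonormalize the fused column space with a thin QR decomposition.
    Q, R = torch.linalg.qr(B_cat, mode="reduced")      # Q: d x R_tot
    M = R @ A_cat                                      # R_tot x k

    # Reduce back to the global rank; truncating an SVD of the small matrix M
    # is used here as one plausible choice, not necessarily the paper's.
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    B_new = (Q @ U[:, :r_global]) * S[:r_global]       # d x r_global
    A_new = Vh[:r_global, :]                           # r_global x k
    return B_new, A_new
```

In this sketch, a client assigned a smaller local rank r_i could, for example, keep only the first r_i rows of the shared A0; how ILoRA actually distributes the shared basis across ranks is not specified in the abstract.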
Similar Papers
ADF-LoRA: Alternating Low-Rank Aggregation for Decentralized Federated Fine-Tuning
Machine Learning (CS)
Makes computer learning work better without a central boss.
HLoRA: Efficient Federated Learning System for LLM Heterogeneous Fine-Tuning
Distributed, Parallel, and Cluster Computing
Teaches AI new things without seeing private data.
Adaptive LoRA Experts Allocation and Selection for Federated Fine-Tuning
Machine Learning (CS)
Helps AI learn from private data without sharing.