Elastic Mixture of Rank-Wise Experts for Knowledge Reuse in Federated Fine-Tuning

Published: November 30, 2025 | arXiv ID: 2512.00902v1

By: Yebo Wu, Jingguang Li, Zhijiang Guo, and more

Potential Business Impact:

Reuses knowledge from previously trained AI adapter modules to fine-tune new models faster and at lower cost.

Business Areas:
Crowdsourcing Collaboration

Federated fine-tuning offers a promising solution for adapting Large Language Models (LLMs) to downstream tasks while safeguarding data privacy. However, its high computational and communication demands hinder deployment on resource-constrained devices. In this paper, we propose SmartFed, a resource-efficient federated fine-tuning framework. SmartFed intelligently reuses the knowledge embedded in existing LoRA modules, eliminating the need for expensive training from scratch when adapting LLMs to new tasks. To exploit this knowledge effectively and ensure scalability, we introduce Mixture of Rank-Wise Experts (MoRE), which decomposes LoRA modules into fine-grained rank-level experts that are selectively activated and combined based on input semantics and resource budgets. To further optimize resource utilization, we present Elastic Expert Quota Allocation (EEQA), which adaptively allocates expert capacity across parameter matrices according to their contribution to model performance, concentrating compute on the most critical experts. Extensive evaluations across multiple benchmarks demonstrate that SmartFed significantly outperforms existing methods in both model performance and training efficiency.
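The abstract describes the two mechanisms without implementation details, so the sketches below are illustrative only. The first shows the general idea behind rank-wise experts: a LoRA update B·A of rank r can be viewed as r rank-1 components, and a small router can activate only the top-k of them per input. All names here (RankWiseLoRA, router, top_k) are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RankWiseLoRA(nn.Module):
    """Minimal sketch: a LoRA update B @ A split into rank-1 'experts'.

    Expert i is the outer product of B[:, i] and A[i, :]; a learned router
    activates only the top-k ranks per input, so effective compute scales
    with k rather than the full rank r.
    """
    def __init__(self, d_in: int, d_out: int, rank: int, top_k: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # (rank, d_in)
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # (d_out, rank)
        self.router = nn.Linear(d_in, rank)  # one score per rank-level expert
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, d_in)
        scores = self.router(x)                           # (batch, rank)
        weights, idx = scores.topk(self.top_k, dim=-1)    # pick k experts per input
        weights = F.softmax(weights, dim=-1)              # normalize over active experts
        h = x @ self.A.t()                                # per-rank activations: (batch, rank)
        mask = torch.zeros_like(h).scatter_(-1, idx, weights)  # zero out inactive ranks
        return (h * mask) @ self.B.t()                    # (batch, d_out)
```

A client with a tight resource budget could set top_k low (say 2 of 16 ranks) while a stronger client uses more, which is one plausible reading of experts being "selectively activated and combined based on input semantics and resource budgets."

The second sketch illustrates the quota-allocation idea behind EEQA under a similarly loose reading: given an importance score per parameter matrix (e.g., a gradient-norm or sensitivity estimate), split a fixed active-expert budget proportionally. The function name and scoring scheme are assumptions, not the paper's method.

```python
def allocate_expert_quota(importance: dict[str, float],
                          total_budget: int,
                          min_quota: int = 1) -> dict[str, int]:
    """Split total_budget active experts across parameter matrices in
    proportion to their importance scores, with a floor of min_quota each.
    Rounding may over/undershoot the budget slightly; a real implementation
    would rebalance to hit it exactly."""
    total = sum(importance.values())
    return {name: max(min_quota, round(total_budget * score / total))
            for name, score in importance.items()}

# E.g., attention projections judged twice as important as the MLP:
# allocate_expert_quota({"q_proj": 2.0, "k_proj": 2.0,
#                        "v_proj": 2.0, "mlp": 1.0}, total_budget=16)
```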

Country of Origin
🇲🇴 Macao

Page Count
16 pages

Category
Computer Science: Distributed, Parallel, and Cluster Computing