Federated Fine-Tuning of Sparsely-Activated Large Language Models on Resource-Constrained Devices
By: Fahao Chen, Jie Wan, Peng Li, and more
Potential Business Impact:
Lets large AI models be fine-tuned faster on ordinary, low-power computers.
Federated fine-tuning of Mixture-of-Experts (MoE)-based large language models (LLMs) is challenging due to their massive computational requirements and the resource constraints of participants. Existing work attempts to fill this gap through model quantization, computation offloading, or expert pruning. However, these approaches cannot achieve the desired performance due to impractical system assumptions and a lack of consideration for MoE-specific characteristics. In this paper, we propose FLUX, a system designed to enable federated fine-tuning of MoE-based LLMs across participants with constrained computing resources (e.g., consumer-grade GPUs), aiming to minimize time-to-accuracy. FLUX introduces three key innovations: (1) quantization-based local profiling to estimate expert activation with minimal overhead, (2) adaptive layer-aware expert merging to reduce resource consumption while preserving accuracy, and (3) dynamic expert role assignment using an exploration-exploitation strategy to balance tuning and non-tuning experts. Extensive experiments on LLaMA-MoE and DeepSeek-MoE with multiple benchmark datasets demonstrate that FLUX significantly outperforms existing methods, achieving up to 4.75X speedup in time-to-accuracy.
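The abstract only names the three mechanisms, so as a rough illustration of the third one, here is a minimal epsilon-greedy sketch of how per-round tuning vs. non-tuning expert roles might be assigned from locally profiled activation counts. The function name, the `epsilon` parameter, and the activation-count interface are assumptions made for illustration only, not FLUX's actual algorithm.

```python
import random

def assign_expert_roles(activation_counts, budget, epsilon=0.1, rng=None):
    """Pick which experts to mark as 'tuning' (trainable) this round.

    activation_counts: dict expert_id -> observed activation frequency
                       (e.g., from a quantized profiling pass on local data)
    budget:            number of experts the device can afford to tune
    epsilon:           fraction of the budget reserved for exploration
    """
    rng = rng or random.Random(0)
    experts = list(activation_counts)
    # Exploitation: favour experts that local data activates most often.
    ranked = sorted(experts, key=lambda e: activation_counts[e], reverse=True)
    n_explore = max(1, int(epsilon * budget))
    exploit = ranked[: budget - n_explore]
    # Exploration: occasionally tune rarely observed experts so their
    # usefulness estimates stay fresh across rounds.
    remaining = [e for e in experts if e not in exploit]
    explore = rng.sample(remaining, min(n_explore, len(remaining)))
    tuning = set(exploit) | set(explore)
    return {e: ("tuning" if e in tuning else "frozen") for e in experts}

# Toy usage: 8 experts in one MoE layer, device budget of 3 trainable experts.
counts = {f"expert_{i}": c for i, c in enumerate([40, 3, 25, 0, 12, 7, 1, 30])}
print(assign_expert_roles(counts, budget=3, epsilon=0.34))
```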
Similar Papers
Unlocking Personalized Knowledge in Federated Large Language Model: The Power of Mixture of Experts
Artificial Intelligence
Helps AI learn from many people without sharing private data.
FFT-MoE: Efficient Federated Fine-Tuning for Foundation Models via Large-scale Sparse MoE under Heterogeneous Edge
Machine Learning (CS)
Teaches AI to learn from many computers without sharing secrets.
Elastic Mixture of Rank-Wise Experts for Knowledge Reuse in Federated Fine-Tuning
Distributed, Parallel, and Cluster Computing
Reuses old AI knowledge to train new AI faster.