Federated Fine-Tuning of Sparsely-Activated Large Language Models on Resource-Constrained Devices

Published: August 26, 2025 | arXiv ID: 2508.19078v1

By: Fahao Chen, Jie Wan, Peng Li, and more

Potential Business Impact:

Enables large AI language models to be fine-tuned faster on inexpensive, resource-limited hardware such as consumer-grade GPUs.

Business Areas:
Artificial Intelligence, Science and Engineering

Federated fine-tuning of Mixture-of-Experts (MoE)-based large language models (LLMs) is challenging due to their massive computational requirements and the resource constraints of participants. Existing work attempts to fill this gap through model quantization, computation offloading, or expert pruning. However, these approaches cannot achieve the desired performance due to impractical system assumptions and a lack of consideration for MoE-specific characteristics. In this paper, we propose FLUX, a system designed to enable federated fine-tuning of MoE-based LLMs across participants with constrained computing resources (e.g., consumer-grade GPUs), aiming to minimize time-to-accuracy. FLUX introduces three key innovations: (1) quantization-based local profiling to estimate expert activation with minimal overhead, (2) adaptive layer-aware expert merging to reduce resource consumption while preserving accuracy, and (3) dynamic expert role assignment using an exploration-exploitation strategy to balance tuning and non-tuning experts. Extensive experiments on LLaMA-MoE and DeepSeek-MoE with multiple benchmark datasets demonstrate that FLUX significantly outperforms existing methods, achieving up to 4.75x speedup in time-to-accuracy.
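To make the third idea concrete, below is a minimal sketch of how an exploration-exploitation split between "tuning" and "frozen" experts could look. This is not FLUX's actual algorithm; the function name `assign_expert_roles`, the epsilon-greedy rule, and the per-expert activation counts (assumed to come from a quantized profiling pass) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_expert_roles(activation_counts, num_tunable, epsilon=0.2):
    """Epsilon-greedy split of one MoE layer's experts into roles.

    activation_counts: per-expert activation frequencies estimated from a
        low-overhead (e.g., quantized) profiling pass; higher counts suggest
        the expert is used more often on the local data.
    num_tunable: how many experts this device can afford to fine-tune.
    """
    num_experts = len(activation_counts)
    if rng.random() < epsilon:
        # Exploration: occasionally tune randomly chosen experts so that
        # rarely-activated ones still get a chance to prove useful.
        tuned = rng.choice(num_experts, size=num_tunable, replace=False)
    else:
        # Exploitation: tune the experts activated most often so far.
        tuned = np.argsort(activation_counts)[-num_tunable:]
    roles = np.full(num_experts, "frozen", dtype=object)
    roles[tuned] = "tuning"
    return roles

# Example: 8 experts in a layer, device budget allows tuning 2 of them.
counts = np.array([120, 5, 300, 40, 80, 10, 260, 15])
print(assign_expert_roles(counts, num_tunable=2))
```

In this kind of scheme, non-tuned experts stay frozen (or merged) to cut memory and compute, while the exploration term keeps the role assignment from locking onto an early, possibly noisy activation estimate.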

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing