Memory-Efficient Split Federated Learning for LLM Fine-Tuning on Heterogeneous Mobile Devices

Published: June 3, 2025 | arXiv ID: 2506.02940v1

By: Xiaopei Chen, Liang Li, Fei Ji, and more

Potential Business Impact:

Lets memory-limited phones help fine-tune large AI models faster.

Business Areas:
Mobile Devices, Consumer Electronics, Hardware, Mobile

In this paper, we propose an edge-assisted split federated learning framework that facilitates large language model (LLM) fine-tuning on heterogeneous mobile devices while alleviating memory pressure on both the mobile devices and the edge server. Specifically, each mobile device performs low-rank adaptation (LoRA) fine-tuning on only a subset of the lower layers of the pre-trained LLM, tailored to its individual capacity. The server maintains a full LLM and selectively fine-tunes the corresponding LoRA modules in a sequential manner for each device. To further enhance training efficiency, we propose a server-side training scheduling method that optimizes the processing order of devices to accelerate fine-tuning. Extensive experiments demonstrate that, compared to the baselines, our scheme reduces memory footprint by 79% and training time by 6% while achieving comparable performance.
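The core idea of tailoring the split point to each device's capacity can be sketched as follows. This is a minimal illustration, not the paper's implementation: the memory budgets, model dimensions, and the helper functions (`lora_params_per_layer`, `layers_for_device`) are all hypothetical, and real LoRA adapters attach to several weight matrices per layer rather than the single matrix assumed here.

```python
# Hypothetical sketch: give each device the largest prefix of lower
# transformer layers whose LoRA adapters fit in its memory budget; the
# server's full model covers the remaining layers for that device.

def lora_params_per_layer(hidden_dim: int, rank: int) -> int:
    # A LoRA adapter on one (hidden_dim x hidden_dim) weight matrix adds
    # two low-rank factors: A (hidden_dim x rank) and B (rank x hidden_dim).
    return 2 * hidden_dim * rank

def layers_for_device(mem_budget_mb: float, hidden_dim: int, rank: int,
                      total_layers: int, bytes_per_param: int = 4) -> int:
    """Largest number of lower layers whose LoRA adapters fit the budget."""
    per_layer_bytes = lora_params_per_layer(hidden_dim, rank) * bytes_per_param
    fit = int(mem_budget_mb * 1024 * 1024 // per_layer_bytes)
    return max(1, min(total_layers, fit))

# Heterogeneous devices with illustrative adapter-memory budgets (MB).
devices = {"phone_a": 2.0, "phone_b": 8.0, "tablet": 32.0}
split = {name: layers_for_device(mb, hidden_dim=4096, rank=16, total_layers=32)
         for name, mb in devices.items()}
print(split)  # {'phone_a': 4, 'phone_b': 16, 'tablet': 32}
```

Under these assumed numbers, a weaker phone trains adapters on only the bottom 4 layers while a tablet covers all 32, which is the kind of capacity-aware split the framework relies on.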

Country of Origin
🇨🇳 China

Page Count
6 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing