Flexible Personalized Split Federated Learning for On-Device Fine-Tuning of Foundation Models

Published: August 14, 2025 | arXiv ID: 2508.10349v1

By: Tianjun Yuan, Jiaxiang Geng, Pengchao Han, and more

Potential Business Impact:

Lets resource-constrained devices collaboratively fine-tune foundation models on small, heterogeneous local datasets while preserving each client's personalized objectives.

Fine-tuning foundation models is critical for superior performance on personalized downstream tasks, compared to using pre-trained models. Collaborative learning can leverage local clients' datasets for fine-tuning, but limited client data and heterogeneous data distributions hinder effective collaboration. To address these challenges, we propose a flexible personalized federated learning paradigm that enables clients to engage in collaborative learning while maintaining personalized objectives. Given the limited and heterogeneous computational resources available on clients, we introduce flexible personalized split federated learning (FlexP-SFL). Based on split learning, FlexP-SFL allows each client to train a portion of the model locally while offloading the rest to a server, according to its resource constraints. Additionally, we propose an alignment strategy to improve personalized model performance on global data. Experimental results show that FlexP-SFL outperforms baseline models in personalized fine-tuning efficiency and final accuracy.
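To make the split-learning mechanics concrete, below is a minimal sketch of one training step in which a client runs the first `split` blocks of a model and a server runs the rest, with the split point chosen per client. This is an illustration of generic split learning under assumed module names (`ClientPart`, `ServerPart`, `split_training_step`); the paper's actual FlexP-SFL architecture, split-point selection, and alignment strategy are not specified here.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of one split-learning step. Class and function
# names are illustrative, not taken from the paper.

class ClientPart(nn.Module):
    """First `split` blocks of the model, trained on-device."""
    def __init__(self, blocks, split):
        super().__init__()
        self.blocks = nn.ModuleList(blocks[:split])

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

class ServerPart(nn.Module):
    """Remaining blocks plus task head, trained on the server."""
    def __init__(self, blocks, split, head):
        super().__init__()
        self.blocks = nn.ModuleList(blocks[split:])
        self.head = head

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)

def split_training_step(client, server, batch, labels,
                        client_opt, server_opt, loss_fn):
    # Client computes activations up to its split point and sends them
    # (the "smashed data") to the server.
    activations = client(batch)
    sent = activations.detach().requires_grad_(True)

    # Server finishes the forward pass, computes the loss, and
    # backpropagates down to the cut layer.
    server_opt.zero_grad()
    loss = loss_fn(server(sent), labels)
    loss.backward()
    server_opt.step()

    # Client receives the gradient at the cut layer and completes
    # backpropagation through its local blocks.
    client_opt.zero_grad()
    activations.backward(sent.grad)
    client_opt.step()
    return loss.item()
```

Detaching the activations at the cut layer decouples the two optimizers, so each side updates its own parameters independently; a weaker device would simply construct its `ClientPart` with a smaller `split`, offloading more blocks to the server.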

Page Count
10 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing