Flexible Personalized Split Federated Learning for On-Device Fine-Tuning of Foundation Models
By: Tianjun Yuan, Jiaxiang Geng, Pengchao Han, and more
Potential Business Impact:
Helps AI learn better from small, varied data on each device.
Fine-tuning foundation models is critical for superior performance on personalized downstream tasks, compared to using pre-trained models. Collaborative learning can leverage local clients' datasets for fine-tuning, but limited client data and heterogeneous data distributions hinder effective collaboration. To address these challenges, we propose a flexible personalized federated learning paradigm that enables clients to engage in collaborative learning while maintaining personalized objectives. Given the limited and heterogeneous computational resources available on clients, we introduce flexible personalized split federated learning (FlexP-SFL). Based on split learning, FlexP-SFL allows each client to train a portion of the model locally while offloading the rest to a server, according to its resource constraints. Additionally, we propose an alignment strategy to improve personalized model performance on global data. Experimental results show that FlexP-SFL outperforms baselines in personalized fine-tuning efficiency and final accuracy.
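To make the split-learning mechanism behind FlexP-SFL concrete, here is a minimal sketch of one client/server training step in PyTorch. The layer counts, split point, toy model, and toy batch are illustrative assumptions, not details from the paper; in FlexP-SFL the split point would be chosen per client according to its resources.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper's actual split depends on each client's resources.
TOTAL_BLOCKS = 6
CLIENT_BLOCKS = 2          # portion kept and trained on the client
HIDDEN = 64

def make_block():
    return nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU())

# Split a stack of blocks between client and server.
client_model = nn.Sequential(*[make_block() for _ in range(CLIENT_BLOCKS)])
server_model = nn.Sequential(
    *[make_block() for _ in range(TOTAL_BLOCKS - CLIENT_BLOCKS)],
    nn.Linear(HIDDEN, 10),
)

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One split-learning step on a toy batch.
x = torch.randn(8, HIDDEN)
y = torch.randint(0, 10, (8,))

# Client: forward through its local portion, then "send" the cut-layer
# activations (smashed data) to the server.
smashed = client_model(x)
sent = smashed.detach().requires_grad_(True)   # crosses the client/server boundary

# Server: finish the forward pass, compute the loss, backpropagate to the cut layer.
logits = server_model(sent)
loss = loss_fn(logits, y)
server_opt.zero_grad()
loss.backward()
server_opt.step()

# Client: receive the gradient at the cut layer and finish backpropagation locally.
client_opt.zero_grad()
smashed.backward(sent.grad)
client_opt.step()

print(f"loss = {loss.item():.4f}")
```

In a personalized setting, only the server-side portion (and any shared parameters) would typically be aggregated across clients, while each client's local portion stays personalized; the alignment strategy the abstract mentions is not reflected in this sketch.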
Similar Papers
Collaborative Split Federated Learning with Parallel Training and Aggregation
Distributed, Parallel, and Cluster Computing
Trains AI faster with smarter teamwork.
A Closer Look at Personalized Fine-Tuning in Heterogeneous Federated Learning
Machine Learning (CS)
Makes AI learn better for each person.
Not All Clients Are Equal: Personalized Federated Learning on Heterogeneous Multi-Modal Clients
Machine Learning (CS)
AI learns from everyone without sharing private data.