FedHFT: Efficient Federated Finetuning with Heterogeneous Edge Clients
By: Fatih Ilhan, Selim Furkan Tekin, Tiansheng Huang, and more
Potential Business Impact:
Helps AI learn from private data securely.
Fine-tuning pre-trained large language models (LLMs) has become common practice for personalizing natural language understanding (NLU) applications to downstream tasks and domain-specific datasets. However, two main challenges remain: (i) limited and/or heterogeneous fine-tuning data due to proprietary-data confidentiality or privacy requirements, and (ii) varying computation resources across participating clients such as edge devices. This paper presents FedHFT, an efficient and personalized federated fine-tuning framework that addresses both challenges. First, we introduce a mixture of masked adapters to handle resource heterogeneity across participating clients, enabling high-performance collaborative fine-tuning of pre-trained language models across multiple clients in a distributed setting while keeping proprietary data local. Second, we introduce a bi-level optimization approach based on masked personalization and client clustering to handle non-iid data distributions. Extensive experiments on various natural language understanding tasks demonstrate significant performance and efficiency improvements under data and resource heterogeneity compared to representative heterogeneous federated learning methods.
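To make the masked-adapter idea concrete, the toy sketch below is a minimal illustration, not the authors' implementation: the function names, the keep-ratio capacity scheme, and the aggregation rule are all illustrative assumptions. It shows one federated round in which each client updates only a capacity-sized binary mask of a shared adapter matrix while proprietary data (here, stand-in gradients) stays local, and the server averages the overlapping masked updates.

```python
import numpy as np

# Minimal sketch (not the FedHFT code): one shared adapter weight matrix,
# each client trains only a masked sub-block sized to its compute budget,
# and the server averages updates where client masks overlap.

def make_capacity_mask(shape, keep_ratio, seed):
    """Hypothetical helper: binary mask keeping a fraction of adapter rows."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape)
    n_rows = max(1, int(shape[0] * keep_ratio))
    rows = rng.choice(shape[0], size=n_rows, replace=False)
    mask[rows, :] = 1.0
    return mask

def client_update(global_adapter, mask, local_grad, lr=0.1):
    """Client trains only the masked entries; the rest stay frozen."""
    return global_adapter - lr * (local_grad * mask)

def server_aggregate(global_adapter, client_adapters, masks):
    """Average client deltas entry-wise, normalized by mask coverage."""
    delta_sum = np.zeros_like(global_adapter)
    coverage = np.zeros_like(global_adapter)
    for adapter, mask in zip(client_adapters, masks):
        delta_sum += (adapter - global_adapter) * mask
        coverage += mask
    coverage = np.maximum(coverage, 1.0)  # entries no client trained keep a zero delta
    return global_adapter + delta_sum / coverage

# Toy round with three clients of different capacities (keep ratios are illustrative).
shape = (8, 4)
global_adapter = np.zeros(shape)
keep_ratios = [1.0, 0.5, 0.25]
masks = [make_capacity_mask(shape, r, seed=i) for i, r in enumerate(keep_ratios)]
fake_grads = [np.ones(shape) for _ in keep_ratios]  # stand-in for local gradients on private data
local_adapters = [client_update(global_adapter, m, g) for m, g in zip(masks, fake_grads)]
global_adapter = server_aggregate(global_adapter, local_adapters, masks)
print(global_adapter)
```

The paper's masked personalization and client clustering would sit on top of a loop like this; the sketch only covers the capacity-masking and aggregation step.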
Similar Papers
Fed MobiLLM: Efficient Federated LLM Fine-Tuning over Heterogeneous Mobile Devices via Server Assisted Side-Tuning
Machine Learning (CS)
Makes phones smarter without slowing them down.
Learning Like Humans: Resource-Efficient Federated Fine-Tuning through Cognitive Developmental Stages
Machine Learning (CS)
Makes smart computer programs learn faster, cheaper.
A Survey on Federated Fine-tuning of Large Language Models
Machine Learning (CS)
Teaches computers to learn together, keeping secrets safe.