A Survey on Parameter-Efficient Fine-Tuning for Foundation Models in Federated Learning
By: Jieming Bian, Yuanzhe Peng, Lei Wang, and more
Potential Business Impact:
Trains smart AI models faster, using less power.
Foundation models have revolutionized artificial intelligence by providing robust, versatile architectures pre-trained on large-scale datasets. However, adapting these massive models to specific downstream tasks requires fine-tuning, which can be prohibitively expensive in terms of computational resources. Parameter-Efficient Fine-Tuning (PEFT) methods address this challenge by updating only a small subset of parameters. Meanwhile, Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data, making it ideal for privacy-sensitive applications. This survey provides a comprehensive review of the integration of PEFT techniques within federated learning environments. We systematically categorize existing approaches into three main groups: Additive PEFT (which introduces new trainable parameters), Selective PEFT (which fine-tunes only subsets of existing parameters), and Reparameterized PEFT (which transforms model architectures to enable efficient updates). For each category, we analyze how these methods address the unique challenges of federated settings, including data heterogeneity, communication efficiency, computational constraints, and privacy concerns. We further organize the literature by application domain, covering both natural language processing and computer vision tasks. Finally, we discuss promising research directions, including scaling to larger foundation models, theoretical analysis of federated PEFT methods, and sustainable approaches for resource-constrained environments.
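To make the Reparameterized category and its communication benefit concrete, below is a minimal sketch, assuming a LoRA-style low-rank adapter and plain FedAvg applied to the adapter weights only; the names LoRALinear, local_step, and fedavg_adapters are illustrative assumptions, not an implementation from the survey. It shows why federated PEFT is communication-efficient: only the small A and B matrices leave each client, while the frozen pre-trained backbone never moves.

```python
# Minimal sketch (illustrative, not the survey's reference code) of
# reparameterized PEFT (LoRA-style) combined with federated averaging.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, in_features, out_features, r=4, alpha=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pre-trained backbone stays frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def local_step(model, x, y, lr=1e-2):
    """One client-side SGD step; only the LoRA parameters receive gradients."""
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in (model.A, model.B):
            p -= lr * p.grad
            p.grad = None
    return loss.item()

def fedavg_adapters(clients):
    """Server step: average only the low-rank adapters, never the frozen backbone."""
    with torch.no_grad():
        A_avg = torch.stack([c.A for c in clients]).mean(0)
        B_avg = torch.stack([c.B for c in clients]).mean(0)
        for c in clients:
            c.A.copy_(A_avg)
            c.B.copy_(B_avg)

# Toy federated run: 3 clients sharing one frozen backbone, each with private data.
torch.manual_seed(0)
clients = [LoRALinear(16, 4) for _ in range(3)]
for c in clients[1:]:
    c.base.load_state_dict(clients[0].base.state_dict())

for round_ in range(5):
    for c in clients:
        x, y = torch.randn(8, 16), torch.randn(8, 4)  # private local batch, never shared
        local_step(c, x, y)
    fedavg_adapters(clients)  # only ~r*(in+out) numbers per client cross the network
```

Zero-initializing B is the standard LoRA choice: the adapted model is exactly the pre-trained model at round zero, which keeps early federated rounds stable even under heterogeneous client data.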
Similar Papers
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models
Computation and Language
Makes big AI models learn new things cheaply.
Fine-tune Smarter, Not Harder: Parameter-Efficient Fine-Tuning for Geospatial Foundation Models
Computer Vision and Pattern Recognition
Makes Earth-watching computers learn faster, cheaper.
Parameter-Efficient Continual Fine-Tuning: A Survey
Machine Learning (CS)
AI learns new things without forgetting old ones.