A Closer Look at Personalized Fine-Tuning in Heterogeneous Federated Learning
By: Minghui Chen, Hrad Ghoukasian, Ruinan Jin, and others
Potential Business Impact:
Helps AI models learn each person's data better without losing general accuracy.
Federated Learning (FL) enables decentralized, privacy-preserving model training but struggles to balance global generalization and local personalization due to non-identical data distributions across clients. Personalized Fine-Tuning (PFT), a popular post-hoc solution, fine-tunes the final global model locally but often overfits to skewed client distributions or fails under domain shifts. We propose adapting Linear Probing followed by full Fine-Tuning (LP-FT), a principled centralized strategy for alleviating feature distortion (Kumar et al., 2022), to the FL setting. Through systematic evaluation across seven datasets and six PFT variants, we demonstrate LP-FT's superiority in balancing personalization and generalization. Our analysis uncovers federated feature distortion, a phenomenon where local fine-tuning destabilizes globally learned features, and theoretically characterizes how LP-FT mitigates this via phased parameter updates. We further establish conditions (e.g., partial feature overlap, covariate-concept shift) under which LP-FT outperforms standard fine-tuning, offering actionable guidelines for deploying robust personalization in FL.
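As a rough illustration of the LP-FT procedure described in the abstract, the sketch below shows how a client might personalize a received global model in two phases: first linear probing (training only the classifier head on frozen features), then full fine-tuning with a smaller learning rate. This is a minimal sketch under assumptions not stated in the paper: a PyTorch model exposing `backbone` and `head` submodules, and the function name `lp_ft_personalize`, epoch counts, and learning rates are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of LP-FT used for local personalization in FL.
# Assumes a PyTorch model with a `backbone` feature extractor and a `head`
# linear classifier; all names and hyperparameters are illustrative.
import copy
import torch.nn as nn
import torch.optim as optim


def lp_ft_personalize(global_model: nn.Module,
                      local_loader,
                      lp_epochs: int = 5,
                      ft_epochs: int = 5,
                      lp_lr: float = 1e-2,
                      ft_lr: float = 1e-4) -> nn.Module:
    """Personalize a received global model with LP-FT: linear probing first,
    then full fine-tuning, to limit distortion of globally learned features."""
    model = copy.deepcopy(global_model)
    criterion = nn.CrossEntropyLoss()

    # Phase 1: linear probing -- freeze the backbone, train only the head
    # on the client's local data.
    for p in model.backbone.parameters():
        p.requires_grad = False
    head_opt = optim.SGD(model.head.parameters(), lr=lp_lr)
    for _ in range(lp_epochs):
        for x, y in local_loader:
            head_opt.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            head_opt.step()

    # Phase 2: full fine-tuning -- unfreeze everything and use a smaller
    # learning rate, so the now-aligned head does not push the features
    # far from the globally learned ones.
    for p in model.parameters():
        p.requires_grad = True
    full_opt = optim.SGD(model.parameters(), lr=ft_lr)
    for _ in range(ft_epochs):
        for x, y in local_loader:
            full_opt.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            full_opt.step()

    return model
```

The two-phase split mirrors the paper's point about phased parameter updates: aligning the head before unfreezing the backbone is what is meant to reduce the federated feature distortion the abstract describes.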
Similar Papers
Flexible Personalized Split Federated Learning for On-Device Fine-Tuning of Foundation Models
Distributed, Parallel, and Cluster Computing
Helps AI learn better from small, different data.
FedHFT: Efficient Federated Finetuning with Heterogeneous Edge Clients
Machine Learning (CS)
Helps AI learn from private data securely.
FedHiP: Heterogeneity-Invariant Personalized Federated Learning Through Closed-Form Solutions
Machine Learning (CS)
Makes AI learn better even with messy data.