A Closer Look at Personalized Fine-Tuning in Heterogeneous Federated Learning

Published: November 16, 2025 | arXiv ID: 2511.12695v1

By: Minghui Chen, Hrad Ghoukasian, Ruinan Jin, and more

Potential Business Impact:

Lets AI models adapt to each user's own data, improving personalized predictions without sharing that data.

Business Areas:
Personalization, Commerce and Shopping

Federated Learning (FL) enables decentralized, privacy-preserving model training but struggles to balance global generalization and local personalization due to non-identical data distributions across clients. Personalized Fine-Tuning (PFT), a popular post-hoc solution, fine-tunes the final global model locally but often overfits to skewed client distributions or fails under domain shifts. We propose adapting Linear Probing followed by full Fine-Tuning (LP-FT), a principled centralized strategy for alleviating feature distortion (Kumar et al., 2022), to the FL setting. Through systematic evaluation across seven datasets and six PFT variants, we demonstrate LP-FT's superiority in balancing personalization and generalization. Our analysis uncovers federated feature distortion, a phenomenon where local fine-tuning destabilizes globally learned features, and theoretically characterizes how LP-FT mitigates this via phased parameter updates. We further establish conditions (e.g., partial feature overlap, covariate-concept shift) under which LP-FT outperforms standard fine-tuning, offering actionable guidelines for deploying robust personalization in FL.
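
The LP-FT recipe itself is simple to sketch: each client first freezes the globally trained feature extractor and trains only the linear head (linear probing), then unfreezes everything and fine-tunes at a small learning rate. Below is a minimal PyTorch sketch of that two-phase local step, assuming a model with `backbone` and `head` attributes and a client `DataLoader`; the names and hyperparameters are illustrative, not the paper's code.

```python
import torch.nn as nn
import torch.optim as optim

def lp_ft_personalize(model: nn.Module, loader,
                      lp_epochs: int = 5, ft_epochs: int = 5,
                      lp_lr: float = 1e-2, ft_lr: float = 1e-4):
    """Two-phase local personalization of a global FL model (LP-FT sketch).

    Phase 1 (linear probing): freeze the backbone and train only the head,
    so globally learned features are not distorted while the head adapts
    to the client's local distribution.
    Phase 2 (full fine-tuning): unfreeze all parameters and fine-tune at a
    smaller learning rate, starting from the probed head.
    """
    criterion = nn.CrossEntropyLoss()

    # Phase 1: linear probing -- only the classifier head is trainable.
    for p in model.backbone.parameters():
        p.requires_grad = False
    head_opt = optim.SGD(model.head.parameters(), lr=lp_lr)
    for _ in range(lp_epochs):
        for x, y in loader:
            head_opt.zero_grad()
            criterion(model(x), y).backward()
            head_opt.step()

    # Phase 2: full fine-tuning -- all parameters trainable, smaller LR.
    for p in model.backbone.parameters():
        p.requires_grad = True
    full_opt = optim.SGD(model.parameters(), lr=ft_lr)
    for _ in range(ft_epochs):
        for x, y in loader:
            full_opt.zero_grad()
            criterion(model(x), y).backward()
            full_opt.step()
    return model
```

The phased updates are the point: probing first gives the head a good initialization, so the subsequent full fine-tune makes smaller, less destructive changes to the shared features, which is the mechanism the paper credits for mitigating federated feature distortion.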

Repos / Data Links

Page Count
33 pages

Category
Computer Science:
Machine Learning (CS)