APFL: Analytic Personalized Federated Learning via Dual-Stream Least Squares
By: Kejia Fan, Jianheng Tang, Zhirui Yang, and more
Potential Business Impact:
Makes AI learn better for everyone, even with different data.
Personalized Federated Learning (PFL) faces the significant challenge of delivering personalized models to individual clients through collaborative training. Existing PFL methods are often vulnerable to non-IID data, which severely hinders collective generalization and in turn compromises the subsequent personalization efforts. In this paper, to address this non-IID issue in PFL, we propose an Analytic Personalized Federated Learning (APFL) approach via dual-stream least squares. In our APFL, we use a foundation model as a frozen backbone for feature extraction. On top of this feature extractor, we develop dual-stream analytic models to achieve both collective generalization and individual personalization. Specifically, our APFL incorporates a shared primary stream for global generalization across all clients and a dedicated refinement stream for local personalization of each individual client. The analytical solutions of our APFL yield its ideal property of heterogeneity invariance, which theoretically means that each personalized model remains identical regardless of how heterogeneously the data are distributed across all other clients. Empirical results across various datasets also validate the superiority of our APFL over state-of-the-art baselines, with accuracy advantages ranging from 1.10% to 15.45%.
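The dual-stream idea in the abstract can be sketched in code. The snippet below is a minimal illustration, not the paper's actual algorithm: it assumes features come from a frozen backbone (simulated here with random data), solves a shared ridge-regression classifier analytically from aggregated sufficient statistics (the "primary stream"), and then has each client fit a closed-form local model on the residuals of the shared model (the "refinement stream"). All variable names and the ridge regularizer `lam` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_solve(X, Y, lam=1e-2):
    """Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Simulated frozen-backbone features and one-hot labels for 3 clients.
d, k = 8, 4
clients = []
for _ in range(3):
    X = rng.normal(size=(60, d))            # extracted features
    Y = np.eye(k)[rng.integers(0, k, 60)]   # one-hot labels
    clients.append((X, Y))

# Primary stream: the server aggregates sufficient statistics
# (X^T X and X^T Y) from all clients and solves one shared
# least-squares model in closed form.
lam = 1e-2
G = sum(X.T @ X for X, _ in clients)
H = sum(X.T @ Y for X, Y in clients)
W_global = np.linalg.solve(G + lam * np.eye(d), H)

# Refinement stream: each client analytically fits a local model on
# the residuals left by the shared model; its personalized predictor
# is the sum of the shared and local solutions.
personalized = []
for X, Y in clients:
    R = Y - X @ W_global
    W_local = ridge_solve(X, R, lam)
    personalized.append(W_global + W_local)

# Personalized prediction for client 0.
X0, Y0 = clients[0]
pred = np.argmax(X0 @ personalized[0], axis=1)
acc = (pred == np.argmax(Y0, axis=1)).mean()
print(f"client-0 train accuracy: {acc:.2f}")
```

Because each refinement model is solved in closed form from that client's own data and the fixed shared model, the personalized solution is deterministic; this is the flavor of the heterogeneity-invariance property the abstract describes, though the paper's exact construction may differ.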
Similar Papers
CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks
Machine Learning (CS)
Makes AI learn better from everyone's unique data.
Single-Round Scalable Analytic Federated Learning
Machine Learning (CS)
Trains AI faster without sharing private data.
Not All Clients Are Equal: Personalized Federated Learning on Heterogeneous Multi-Modal Clients
Machine Learning (CS)
AI learns from everyone without sharing private data.