Towards Instance-wise Personalized Federated Learning via Semi-Implicit Bayesian Prompt Tuning
By: Tiandi Ye, Wenyan Liu, Kai Yao, and more
Potential Business Impact:
Helps AI personalize to each user's mixed data without sharing the data itself.
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables collaborative model training across multiple distributed clients without disclosing their raw data. Personalized federated learning (pFL) has gained increasing attention for its ability to address data heterogeneity. However, most existing pFL methods assume that each client's data follows a single distribution and learn one client-level personalized model for each client. This assumption often fails in practice, where a single client may possess data from multiple sources or domains, resulting in significant intra-client heterogeneity and suboptimal performance. To tackle this challenge, we propose pFedBayesPT, a fine-grained instance-wise pFL framework based on visual prompt tuning. Specifically, we formulate instance-wise prompt generation from a Bayesian perspective and model the prompt posterior as an implicit distribution to capture diverse visual semantics. We derive a variational training objective under the semi-implicit variational inference framework. Extensive experiments on benchmark datasets demonstrate that pFedBayesPT consistently outperforms existing pFL methods under both feature and label heterogeneity settings.
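Since the abstract only names the ingredients (an implicit prompt posterior and a semi-implicit variational objective), a minimal PyTorch sketch of how such an instance-wise prompt generator could look may help. Everything here is an illustrative assumption, not the paper's implementation: the class name SemiImplicitPromptGenerator, the layer sizes, the diagonal-Gaussian conditional layer, and the standard-normal prior in the usage note all follow the generic semi-implicit variational inference (SIVI) construction, in which an implicit mixing network pushes noise forward and an explicit Gaussian layer sits on top.

```python
import math
import torch
import torch.nn as nn

class SemiImplicitPromptGenerator(nn.Module):
    """Sample an instance-wise prompt z from a semi-implicit posterior
    q(z | x) = E_{psi ~ q(psi | x)} [ N(z; mu(psi), diag(sigma(psi)^2)) ],
    where q(psi | x) is implicit: a network pushing Gaussian noise forward."""

    def __init__(self, feat_dim, noise_dim, prompt_len, prompt_dim, hidden=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.prompt_len, self.prompt_dim = prompt_len, prompt_dim
        # Implicit mixing distribution: psi = g(x_feat, eps), eps ~ N(0, I).
        self.mixing = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Explicit conditional layer: psi -> (mu, log_var) of the prompt.
        out_dim = prompt_len * prompt_dim
        self.mu = nn.Linear(hidden, out_dim)
        self.log_var = nn.Linear(hidden, out_dim)

    def conditional(self, x_feat, eps):
        """Parameters of the Gaussian layer q(z | psi), psi = g(x_feat, eps)."""
        psi = self.mixing(torch.cat([x_feat, eps], dim=-1))
        return self.mu(psi), self.log_var(psi)

    @staticmethod
    def gauss_log_prob(z, mu, log_var):
        """Diagonal-Gaussian log density, summed over prompt dimensions."""
        return -0.5 * (log_var + (z - mu) ** 2 / log_var.exp()
                       + math.log(2 * math.pi)).sum(-1)

    def forward(self, x_feat, n_mix=5):
        """Return (prompt tokens, SIVI surrogate for log q(z | x))."""
        b = x_feat.size(0)
        # psi^(0): the mixing sample the prompt is actually drawn from.
        eps0 = torch.randn(b, self.noise_dim, device=x_feat.device)
        mu0, lv0 = self.conditional(x_feat, eps0)
        z = mu0 + torch.randn_like(mu0) * (0.5 * lv0).exp()  # reparameterized
        # SIVI surrogate: average the conditional density q(z | psi^(k)) over
        # psi^(0) plus n_mix fresh mixing samples, in log-sum-exp form.
        log_qs = [self.gauss_log_prob(z, mu0, lv0)]
        for _ in range(n_mix):
            eps_k = torch.randn(b, self.noise_dim, device=x_feat.device)
            mu_k, lv_k = self.conditional(x_feat, eps_k)
            log_qs.append(self.gauss_log_prob(z, mu_k, lv_k))
        log_q = torch.logsumexp(torch.stack(log_qs), dim=0) - math.log(n_mix + 1)
        return z.view(b, self.prompt_len, self.prompt_dim), log_q
```

A usage sketch under the same assumptions: the sampled prompt tokens would be prepended to a frozen ViT's patch tokens, and the surrogate log q enters a negative-ELBO-style loss alongside the task likelihood.

```python
gen = SemiImplicitPromptGenerator(feat_dim=768, noise_dim=32,
                                  prompt_len=5, prompt_dim=768)
x_feat = torch.randn(8, 768)           # e.g. [CLS] features from a frozen ViT
prompt, log_q = gen(x_feat)            # prompt: (8, 5, 768)
flat = prompt.flatten(1)
log_prior = SemiImplicitPromptGenerator.gauss_log_prob(
    flat, torch.zeros_like(flat), torch.zeros_like(flat))  # N(0, I) prior
# loss = task_nll + beta * (log_q - log_prior).mean()
```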
Similar Papers
A Closer Look at Personalized Fine-Tuning in Heterogeneous Federated Learning
Machine Learning (CS)
Makes AI learn better for each person.
AutoFed: Manual-Free Federated Traffic Prediction via Personalized Prompt
Machine Learning (CS)
Helps traffic apps learn from everyone's data safely.
pFedDSH: Enabling Knowledge Transfer in Personalized Federated Learning through Data-free Sub-Hypernetwork
Machine Learning (CS)
Helps AI learn from new users without seeing their private info.