Federated Domain Generalization with Latent Space Inversion
By: Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, and more
Potential Business Impact:
Keeps private data safe while learning from many clients.
Federated domain generalization (FedDG) addresses distribution shifts among clients in a federated learning framework. FedDG methods aggregate the parameters of locally trained client models to form a global model that generalizes to unseen clients while preserving data privacy. However, while improving the generalization capability of the global model, many existing FedDG approaches jeopardize privacy by sharing statistics of client data among clients. Our solution addresses this problem by contributing new ways to perform local client training and model aggregation. To improve local client training, we enforce (domain) invariance across local models with the help of a novel technique, latent space inversion, which enables better client privacy. When client data are not i.i.d., aggregating local models may discard certain local adaptations. To overcome this, we propose an important-weight aggregation strategy that prioritizes, during aggregation, the parameters that most influence the predictions of local models. Our extensive experiments show that our approach achieves superior results over state-of-the-art methods with less communication overhead.
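The abstract does not spell out how parameter importance is scored or combined, so the following is only a minimal Python/NumPy sketch of the general idea of importance-weighted aggregation: instead of plain FedAvg's uniform averaging, each client supplies a non-negative importance map per parameter tensor (for example, a gradient-magnitude estimate), and parameters are averaged with those maps as weights. All function and variable names here are illustrative, not taken from the paper.

import numpy as np

def importance_weighted_aggregate(client_params, client_importances):
    # client_params: list of dicts mapping layer name -> np.ndarray of parameters
    # client_importances: list of dicts with the same keys/shapes holding
    #   non-negative importance scores (hypothetical, e.g. gradient magnitudes)
    global_params = {}
    eps = 1e-12  # avoid division by zero when all importances vanish
    for name in client_params[0]:
        imps = np.stack([imp[name] for imp in client_importances])  # (K, ...)
        pars = np.stack([p[name] for p in client_params])           # (K, ...)
        # Per-entry weighted average across the K clients
        global_params[name] = (imps * pars).sum(axis=0) / (imps.sum(axis=0) + eps)
    return global_params

# Toy usage: two clients, one layer of shape (2,)
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
imps    = [{"w": np.array([0.9, 0.1])}, {"w": np.array([0.1, 0.9])}]
print(importance_weighted_aggregate(clients, imps)["w"])  # approx [1.2, 3.8]

In this sketch, entries that a client's model relies on heavily dominate the average for that entry, so local adaptations are less likely to be washed out than under uniform averaging; the paper's actual scoring and aggregation rule may differ.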
Similar Papers
TreeFedDG: Alleviating Global Drift in Federated Domain Generalization for Medical Image Segmentation
CV and Pattern Recognition
Helps doctors see diseases better in scans.
HFedATM: Hierarchical Federated Domain Generalization via Optimal Transport and Regularized Mean Aggregation
Machine Learning (CS)
Helps AI learn from many devices without sharing data.
Federated Learning with Domain Shift Eraser
CV and Pattern Recognition
Fixes AI learning from different sources.