FedQuad: Federated Stochastic Quadruplet Learning to Mitigate Data Heterogeneity
By: Ozgu Goksu, Nicolas Pugeault
Potential Business Impact:
Makes AI learn better from many different computers.
Federated Learning (FL) enables decentralised model training, addressing the challenges posed by distributed data and the need for privacy preservation. However, the generalisation of the global model often suffers from data heterogeneity across clients, and this challenge becomes even more pronounced when client datasets are small and class-imbalanced. To address data heterogeneity, we propose a novel method, FedQuad, that explicitly optimises for smaller intra-class variance and larger inter-class variance across clients, thereby reducing the degradation that model aggregation inflicts on client representations in the global model. Our approach minimises the distance between similar (positive) pairs while maximising the distance between negative pairs, effectively disentangling client data in the shared feature space. We evaluate our method on the CIFAR-10 and CIFAR-100 datasets under various data distributions and numbers of clients, demonstrating superior performance compared to existing approaches. Furthermore, we provide a detailed analysis of metric learning-based strategies within both supervised and federated learning paradigms, highlighting their efficacy in addressing representation learning challenges in federated settings.
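To make the pull/push objective concrete, below is a minimal PyTorch sketch of a standard quadruplet loss (in the style of Chen et al., 2017), one plausible instantiation of the distance objective the abstract describes. The function name, margin values, and implementation details are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    # anchor/positive share a class; negative1 and negative2 come from
    # two further, mutually distinct classes. Margins are illustrative.
    d_ap = F.pairwise_distance(anchor, positive)      # intra-class: pull together
    d_an = F.pairwise_distance(anchor, negative1)     # inter-class: push apart
    d_nn = F.pairwise_distance(negative1, negative2)  # between other classes: push apart
    # Standard quadruplet hinge terms (Chen et al., 2017).
    loss = (F.relu(d_ap.pow(2) - d_an.pow(2) + margin1)
            + F.relu(d_ap.pow(2) - d_nn.pow(2) + margin2)).mean()
    return loss

# Toy usage: a batch of 8 embeddings of dimension 128.
a, p, n1, n2 = (torch.randn(8, 128) for _ in range(4))
print(quadruplet_loss(a, p, n1, n2))

In a federated setting, a loss of this form would presumably be applied during each client's local training, alongside the usual classification objective, before the server aggregates the resulting models.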
Similar Papers
Enhancing Federated Learning Privacy with QUBO
Machine Learning (CS)
Keeps private data safer when training computers.
FedDiverse: Tackling Data Heterogeneity in Federated Learning with Diversity-Driven Client Selection
Machine Learning (CS)
Helps AI learn better from different data.
Towards Heterogeneous Quantum Federated Learning: Challenges and Solutions
Quantum Physics
Makes quantum computers learn together better.