Data Heterogeneity and Forgotten Labels in Split Federated Learning
By: Joana Tirana, Dimitra Tsigkari, David Solans Noguero and more
Potential Business Impact:
Fixes AI forgetting what it learned before.
In Split Federated Learning (SFL), clients collaboratively train a model with the help of a server by splitting the model into two parts. Part-1 is trained locally at each client and aggregated by an aggregator at the end of each round. Part-2 is trained at a server that sequentially processes the intermediate activations received from each client. We study the phenomenon of catastrophic forgetting (CF) in SFL in the presence of data heterogeneity. Specifically, due to the nature of SFL, local updates of part-1 may drift away from the global optimum, while part-2 is sensitive to the processing order, similar to forgetting in continual learning (CL). In particular, we observe that the trained model performs better on classes (labels) seen toward the end of the sequence. We investigate this phenomenon with emphasis on key aspects of SFL, such as the processing order at the server and the choice of cut layer. Based on our findings, we propose Hydra, a novel mitigation method inspired by multi-head neural networks and adapted to the SFL setting. Extensive numerical evaluations show that Hydra outperforms baselines and methods from the literature.
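To make the SFL setup described in the abstract concrete, the following is a minimal PyTorch-style sketch of one training round: each client trains its copy of part-1 up to a cut layer, the server updates part-2 sequentially on each client's activations, and the part-1 copies are averaged at the end of the round. The layer sizes, cut point, optimizers, and synthetic data are illustrative assumptions, not the authors' implementation; Hydra's multi-head mitigation is not shown.

```python
import copy
import torch
import torch.nn as nn

def make_part1():
    # Layers up to the (illustrative) cut layer; trained locally on each client.
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())

def make_part2():
    # Layers after the cut; held at the server and updated sequentially.
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

server_part2 = make_part2()
opt2 = torch.optim.SGD(server_part2.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

global_part1 = make_part1()
# Synthetic per-client batches standing in for heterogeneous local data.
clients = [(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))) for _ in range(3)]

# One SFL round: the server processes clients one after another; this
# processing order is what the paper connects to catastrophic forgetting.
client_models = []
for x, y in clients:
    part1 = copy.deepcopy(global_part1)           # each client starts from the global part-1
    opt1 = torch.optim.SGD(part1.parameters(), lr=0.01)
    opt1.zero_grad(); opt2.zero_grad()
    activations = part1(x)                        # client-side forward up to the cut layer
    loss = loss_fn(server_part2(activations), y)  # server-side forward and loss
    loss.backward()                               # gradients flow back across the cut
    opt2.step(); opt1.step()
    client_models.append(part1)

# End of round: aggregate the clients' part-1 copies (FedAvg-style mean).
with torch.no_grad():
    for name, p in global_part1.named_parameters():
        p.copy_(torch.stack([dict(m.named_parameters())[name] for m in client_models]).mean(0))
```

In this sketch, part-2 sees the clients' activations in a fixed order within the round, so its final updates are dominated by the last clients processed, which is the sequence effect the paper studies.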
Similar Papers
Collaborative Split Federated Learning with Parallel Training and Aggregation
Distributed, Parallel, and Cluster Computing
Trains AI faster with smarter teamwork.
Accelerating Wireless Distributed Learning via Hybrid Split and Federated Learning Optimization
Machine Learning (CS)
Makes smart devices learn faster together.
Federated Split Learning with Improved Communication and Storage Efficiency
Machine Learning (CS)
Trains AI smarter with less data sent.