Data Heterogeneity and Forgotten Labels in Split Federated Learning

Published: November 12, 2025 | arXiv ID: 2511.09736v1

By: Joana Tirana, Dimitra Tsigkari, David Solans Noguero, et al.

Potential Business Impact:

Mitigates catastrophic forgetting in split federated learning, so models trained across many clients with heterogeneous data retain accuracy on all classes rather than only those seen most recently.

Business Areas:
A/B Testing, Data and Analytics

In Split Federated Learning (SFL), clients collaboratively train a model with the help of a server by splitting the model into two parts. Part-1 is trained locally at each client and aggregated by an aggregator at the end of each round. Part-2 is trained at a server that sequentially processes the intermediate activations received from each client. We study the phenomenon of catastrophic forgetting (CF) in SFL in the presence of data heterogeneity. In particular, due to the nature of SFL, local updates of part-1 may drift away from the global optimum, while part-2 is sensitive to the processing sequence, similar to forgetting in continual learning (CL). Specifically, we observe that the trained model performs better on classes (labels) seen at the end of the sequence. We investigate this phenomenon with emphasis on key aspects of SFL, such as the processing order at the server and the choice of cut layer. Based on our findings, we propose Hydra, a novel mitigation method inspired by multi-head neural networks and adapted to the SFL setting. Extensive numerical evaluations show that Hydra outperforms baselines and methods from the literature.
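The round structure described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the `Client`, `Server`, and `fedavg` names and the scalar "models" are assumptions chosen to make the control flow visible, namely that part-1 weights are averaged across clients at round end while part-2 consumes each client's activations in sequence.

```python
# Toy sketch of one SFL round. All names and the scalar "models" are
# illustrative assumptions; only the control flow mirrors the description.

def fedavg(weight_lists):
    """Average part-1 weights across clients (the aggregator's role)."""
    n = len(weight_lists)
    return [sum(ws) / n for ws in zip(*weight_lists)]

class Client:
    def __init__(self, w, data):
        self.w = w        # part-1 parameters (toy: one scalar weight)
        self.data = data  # local examples

    def forward_part1(self):
        # Intermediate activations at the cut layer, sent to the server.
        return [x * self.w[0] for x in self.data]

class Server:
    def __init__(self, v):
        self.v = v        # part-2 parameters (toy: one scalar)

    def process(self, activations):
        # The server consumes each client's activations in turn; this
        # sequential ordering is what makes part-2 order-sensitive.
        for a in activations:
            self.v += 0.01 * a  # stand-in for a gradient step

clients = [Client([1.0], [0.5, 0.2]), Client([2.0], [0.1])]
server = Server(0.0)

for c in clients:                            # sequential server-side pass
    server.process(c.forward_part1())

new_part1 = fedavg([c.w for c in clients])   # aggregate part-1 at round end
```

Because the server's state is updated client by client, later clients in the sequence leave the freshest imprint on part-2, which is the order-sensitivity the paper links to catastrophic forgetting.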

Repos / Data Links

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)