FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning

Published: November 27, 2025 | arXiv ID: 2511.22265v1

By: Yuan Yao, Lixu Wang, Jiaqi Wu, and more

Potential Business Impact:

Lets organizations train models together without sharing private data.

Business Areas:
Facial Recognition Data and Analytics, Software

Federated learning (FL) enables collaborative training across clients without compromising privacy. While most existing FL methods assume homogeneous model architectures, client heterogeneity in data and resources renders this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed the entangled representation. In FedRE, each client aggregates its local representations into a single entangled representation using normalized random weights and applies the same weights to integrate the corresponding one-hot label encodings into an entangled-label encoding. These are then uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while the random weights are resampled each round to introduce diversity, mitigating the global classifier's overconfidence and promoting smoother decision boundaries. Furthermore, because each client uploads only a single cross-category entangled representation along with its entangled-label encoding, the framework mitigates the risk of representation inversion attacks and reduces communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The code is available at https://github.com/AIResearch-Group/FedRE.
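The entanglement step described in the abstract can be sketched in a few lines: sample random weights, normalize them, and apply the same weights to both the local representations and their one-hot labels. This is a minimal illustration of that idea, not the paper's implementation; the function name `entangle` and the weighting details (uniform random weights normalized to sum to one) are assumptions.

```python
import numpy as np

def entangle(representations, labels_onehot, rng):
    """Aggregate local representations into one entangled representation.

    representations: array of shape (n, d), one row per local sample.
    labels_onehot:   array of shape (n, C), one-hot label per sample.
    rng:             a numpy random Generator (resampled each round).
    """
    n = representations.shape[0]
    # Assumed scheme: nonnegative random weights normalized to sum to 1.
    w = rng.random(n)
    w = w / w.sum()
    # Weighted sum of representations -> single entangled representation (d,).
    entangled_rep = w @ representations
    # The SAME weights applied to the one-hot labels yield the
    # entangled-label encoding: a soft distribution over C categories.
    entangled_label = w @ labels_onehot
    return entangled_rep, entangled_label
```

Because the weights sum to one and each one-hot row sums to one, the entangled-label encoding is itself a probability distribution over categories, which is what lets the server supervise the single uploaded representation across classes.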

Repos / Data Links
https://github.com/AIResearch-Group/FedRE

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)