CAFEDistill: Learning Personalized and Dynamic Models through Federated Early-Exit Network Distillation
By: Boyi Liu, Zimu Zhou, Yongxin Tong
Personalized Federated Learning (PFL) enables collaborative model training on decentralized, heterogeneous data while tailoring each model to its client's unique distribution. However, existing PFL methods produce static models with a fixed tradeoff between accuracy and efficiency, limiting their applicability in environments where inference requirements vary with context and resource availability. Early-exit networks (EENs) offer adaptive inference by attaching intermediate classifiers. Yet integrating them into PFL is challenging due to client-wise heterogeneity and depth-wise interference arising from conflicting exit objectives. Prior studies fail to resolve both conflicts simultaneously, leading to suboptimal performance. In this paper, we propose CAFEDistill, a Conflict-Aware Federated Exit Distillation framework that jointly addresses these conflicts and extends PFL to early-exit networks. Through a progressive, depth-prioritized student coordination mechanism, CAFEDistill mitigates interference between shallow and deep exits while allowing effective personalized knowledge transfer across clients. Furthermore, it reduces communication overhead via a client-decoupled formulation. Extensive evaluations show that CAFEDistill outperforms state-of-the-art methods, achieving higher accuracy and reducing inference costs by 30.79%-46.86%.
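To make the early-exit idea the abstract builds on concrete, below is a minimal, hypothetical PyTorch sketch of an early-exit network: a backbone split into blocks with an intermediate classifier after each block, trained on all exits and allowed to stop at the first sufficiently confident exit at inference time. This is only an illustration of standard EENs under assumed layer sizes and a confidence threshold, not CAFEDistill itself or its distillation procedure.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy early-exit network: a backbone split into blocks, with an
    intermediate classifier (exit head) attached after each block.
    Layer widths and the confidence threshold are illustrative choices."""

    def __init__(self, num_classes=10, channels=(16, 32, 64), threshold=0.9):
        super().__init__()
        self.threshold = threshold  # softmax confidence needed to exit early
        self.blocks = nn.ModuleList()
        self.exits = nn.ModuleList()
        in_ch = 3
        for out_ch in channels:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(),
            ))
            self.exits.append(nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(out_ch, num_classes),
            ))
            in_ch = out_ch

    def forward(self, x):
        # Training: return logits from every exit so each head receives a loss
        # (the conflicting per-exit objectives the abstract refers to).
        if self.training:
            logits = []
            for block, exit_head in zip(self.blocks, self.exits):
                x = block(x)
                logits.append(exit_head(x))
            return logits
        # Inference (single sample): stop at the first exit whose confidence
        # exceeds the threshold, skipping the remaining, deeper computation.
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            out = exit_head(x)
            if out.softmax(dim=-1).max(dim=-1).values.item() >= self.threshold:
                return out
        return out  # fall back to the final (deepest) exit
```

Raising or lowering the threshold trades accuracy for inference cost, which is the adaptivity that a static personalized model cannot offer.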
Similar Papers
Hybrid Federated Learning for Noise-Robust Training
Machine Learning (CS)
Helps phones learn together without sharing private info.
pMixFed: Efficient Personalized Federated Learning through Adaptive Layer-Wise Mixup
Machine Learning (CS)
Helps AI learn better from different people's data.
pFedDSH: Enabling Knowledge Transfer in Personalized Federated Learning through Data-free Sub-Hypernetwork
Machine Learning (CS)
Helps AI learn from new users without seeing their private info.