FedKDX: Federated Learning with Negative Knowledge Distillation for Enhanced Healthcare AI Systems
By: Quang-Tu Pham, Hoang-Dieu Vu, Dinh-Dat Pham, and more
Potential Business Impact:
Lets hospitals train AI together without sharing patient data.
This paper introduces FedKDX, a federated learning framework that addresses limitations of existing healthcare AI systems through Negative Knowledge Distillation (NKD). Unlike approaches that transfer only positive knowledge, FedKDX captures both target and non-target information to improve model generalization in healthcare applications. The framework integrates multiple knowledge transfer techniques, including traditional knowledge distillation, contrastive learning, and NKD, within a unified architecture that preserves privacy while reducing communication costs. In experiments on healthcare datasets (SLEEP, UCI-HAR, and PAMAP2), FedKDX demonstrates higher accuracy (up to 2.53% over state-of-the-art methods), faster convergence, and better performance on non-IID data distributions. Theoretical analysis supports NKD's contribution to addressing statistical heterogeneity in distributed healthcare data. The approach is promising for privacy-sensitive medical applications under regulatory frameworks such as HIPAA and GDPR, balancing performance against practical implementation requirements in decentralized healthcare settings. The code and model are available at https://github.com/phamdinhdat-ai/Fed_2024.
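The abstract does not spell out the NKD loss itself, so the following is only a minimal PyTorch sketch of one plausible reading: the distillation signal is split into a target-class term and a "negative" term over the non-target classes. The function name nkd_loss and the temperature and gamma parameters are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def nkd_loss(student_logits, teacher_logits, targets, temperature=4.0, gamma=1.0):
    # Hypothetical NKD-style loss (assumption, not the paper's exact method):
    # split the teacher's knowledge into a target-class term and a
    # non-target ("negative") term.
    num_classes = student_logits.size(1)
    one_hot = F.one_hot(targets, num_classes).bool()

    # Target term: student log-probability on the true class, weighted by
    # the teacher's confidence in that class.
    s_log = F.log_softmax(student_logits / temperature, dim=1)
    t_prob = F.softmax(teacher_logits / temperature, dim=1)
    target_term = -(t_prob[one_hot] * s_log[one_hot]).mean()

    # Non-target term: KL divergence between the renormalized
    # distributions over the wrong classes only, i.e. the teacher's
    # knowledge about which incorrect classes are (im)plausible.
    batch = student_logits.size(0)
    s_nt = student_logits[~one_hot].view(batch, num_classes - 1)
    t_nt = teacher_logits[~one_hot].view(batch, num_classes - 1)
    non_target_term = F.kl_div(
        F.log_softmax(s_nt / temperature, dim=1),
        F.softmax(t_nt / temperature, dim=1),
        reduction='batchmean',
    ) * temperature ** 2

    return target_term + gamma * non_target_term

# Example usage on random data:
# loss = nkd_loss(torch.randn(8, 5), torch.randn(8, 5), torch.randint(0, 5, (8,)))

In a federated setting, a term like this would typically be added to each client's local objective alongside the standard task loss; how FedKDX combines it with contrastive learning and the communication protocol is detailed in the paper itself.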
Similar Papers
HFedCKD: Toward Robust Heterogeneous Federated Learning via Data-free Knowledge Distillation and Two-way Contrast
Machine Learning (CS)
Helps AI learn better from many different phones.
FedKD-hybrid: Federated Hybrid Knowledge Distillation for Lithography Hotspot Detection
Machine Learning (CS)
Finds tiny flaws in computer chips safely.
Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks
Cryptography and Security
Stops bad guys from secretly messing up shared AI.