Score: 2

FedKDX: Federated Learning with Negative Knowledge Distillation for Enhanced Healthcare AI Systems

Published: January 8, 2026 | arXiv ID: 2601.04587v1

By: Quang-Tu Pham, Hoang-Dieu Vu, Dinh-Dat Pham, and more

Potential Business Impact:

Lets hospitals train shared AI models collaboratively without exposing private patient data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper introduces FedKDX, a federated learning framework that addresses statistical heterogeneity and communication overhead in healthcare AI through Negative Knowledge Distillation (NKD). Unlike existing approaches that focus solely on positive knowledge transfer, FedKDX captures both target and non-target information to improve model generalization in healthcare applications. The framework integrates multiple knowledge transfer techniques, including traditional knowledge distillation, contrastive learning, and NKD, within a unified architecture that maintains privacy while reducing communication costs. Through experiments on healthcare datasets (SLEEP, UCI-HAR, and PAMAP2), FedKDX demonstrates improved accuracy (up to 2.53% over state-of-the-art methods), faster convergence, and better performance on non-IID data distributions. Theoretical analysis supports NKD's contribution to addressing statistical heterogeneity in distributed healthcare data. The approach shows promise for privacy-sensitive medical applications under regulatory frameworks such as HIPAA and GDPR, offering a balanced trade-off between performance and practical implementation requirements in decentralized healthcare settings. The code and model are available at https://github.com/phamdinhdat-ai/Fed_2024.
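The summary does not give FedKDX's exact loss, but the target/non-target split it describes can be illustrated with a minimal decoupled-distillation sketch. Everything below is an assumption for illustration: the function name `nkd_style_loss`, the hyperparameters `tau`, `alpha`, and `beta`, and the specific renormalization of non-target probabilities are not taken from the paper.

```python
# Hedged sketch of a "negative" (non-target) knowledge-distillation loss,
# splitting distillation into a target-class term and a non-target term.
import torch
import torch.nn.functional as F

def nkd_style_loss(student_logits, teacher_logits, labels,
                   tau=2.0, alpha=1.0, beta=1.0):
    """student_logits, teacher_logits: (batch, num_classes) raw logits.
    labels: (batch,) ground-truth class indices.
    tau: softmax temperature; alpha/beta weight the two terms
    (all assumed hyperparameters, not from the paper)."""
    b, c = student_logits.shape
    target_mask = F.one_hot(labels, c).bool()

    s_prob = F.softmax(student_logits / tau, dim=1)
    t_prob = F.softmax(teacher_logits / tau, dim=1)

    # Target term: match the teacher's confidence on the true class.
    s_t = s_prob[target_mask]            # one probability per sample
    t_t = t_prob[target_mask]
    target_term = F.binary_cross_entropy(s_t, t_t.detach())

    # Non-target term: KL divergence over the remaining classes only,
    # renormalized so they form a distribution. This is the
    # "non-target information" the summary refers to.
    s_nt = s_prob.masked_fill(target_mask, 0.0)
    t_nt = t_prob.masked_fill(target_mask, 0.0)
    s_nt = s_nt / s_nt.sum(dim=1, keepdim=True).clamp_min(1e-8)
    t_nt = t_nt / t_nt.sum(dim=1, keepdim=True).clamp_min(1e-8)
    non_target_term = F.kl_div(s_nt.clamp_min(1e-8).log(), t_nt.detach(),
                               reduction="batchmean") * tau * tau

    return alpha * target_term + beta * non_target_term
```

In a federated setting, a loss of this shape would typically be computed locally on each client, with the teacher logits coming from the aggregated global model, so that only model updates, not patient data, leave the client.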

Repos / Data Links
https://github.com/phamdinhdat-ai/Fed_2024

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)