Selective Attention Federated Learning: Improving Privacy and Efficiency for Clinical Text Classification
By: Yue Li, Lihong Zhang
Potential Business Impact:
Trains AI on private health data faster and more safely.
Federated Learning (FL) faces major challenges regarding communication overhead and model privacy when training large language models (LLMs), especially in healthcare applications. To address these challenges, we introduce Selective Attention Federated Learning (SAFL), a novel approach that dynamically fine-tunes only those transformer layers identified as attention-critical. By employing attention patterns to determine layer importance, SAFL significantly reduces communication bandwidth and enhances differential privacy resilience. Evaluations on clinical NLP benchmarks (i2b2 Clinical Concept Extraction and MIMIC-III discharge summaries) demonstrate that SAFL achieves competitive performance with centralized models while substantially improving communication efficiency and privacy preservation.
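The abstract describes the selection mechanism only at a high level. As a rough illustration, the sketch below shows one way a client could rank transformer layers by an attention-derived importance score and transmit updates only for the top-ranked layers, which is where the bandwidth savings (and the smaller surface that must be protected under differential privacy) would come from. The function names and the mean-attention-magnitude heuristic are assumptions for illustration, not the paper's actual criterion.

```python
# Minimal sketch of attention-based layer selection for a federated round.
# Not the authors' implementation; the importance heuristic is assumed.

import numpy as np

def attention_importance(attn_maps):
    """Score each layer by the average magnitude of its attention weights.

    attn_maps: dict mapping layer name -> array of shape (heads, seq, seq).
    Returns a dict of layer name -> scalar importance score.
    """
    return {name: float(np.mean(np.abs(a))) for name, a in attn_maps.items()}

def select_critical_layers(scores, k):
    """Keep the k layers with the highest importance scores."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

def build_update(local_weights, global_weights, critical_layers):
    """Client side: send weight deltas only for the selected layers."""
    return {name: local_weights[name] - global_weights[name]
            for name in critical_layers}

# Toy round: 4 layers, keep the 2 most attention-critical ones.
rng = np.random.default_rng(0)
attn_maps = {f"layer_{i}": rng.random((8, 16, 16)) for i in range(4)}
global_w = {f"layer_{i}": rng.normal(size=(32,)) for i in range(4)}
local_w = {name: w + 0.01 * rng.normal(size=w.shape)
           for name, w in global_w.items()}

critical = select_critical_layers(attention_importance(attn_maps), k=2)
update = build_update(local_w, global_w, critical)
print("transmitting layers:", sorted(update))
```

In a full FL round, the server would aggregate these partial updates per layer; only the transmitted layers need to be clipped and noised for differential privacy, which is consistent with the efficiency and privacy claims above, though the details here are assumptions.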
Similar Papers
Federated Learning with Layer Skipping: Efficient Training of Large Language Models for Healthcare NLP
Machine Learning (CS)
Trains AI for doctors without sharing patient info.
Single-Round Scalable Analytic Federated Learning
Machine Learning (CS)
Trains AI faster without sharing private data.
Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation
Machine Learning (CS)
Steals private info from shared AI training.