Selective Attention Federated Learning: Improving Privacy and Efficiency for Clinical Text Classification

Published: April 16, 2025 | arXiv ID: 2504.11793v3

By: Yue Li, Lihong Zhang

Potential Business Impact:

Trains AI on private health data faster, safer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Federated Learning (FL) faces major challenges regarding communication overhead and model privacy when training large language models (LLMs), especially in healthcare applications. To address these, we introduce Selective Attention Federated Learning (SAFL), a novel approach that dynamically fine-tunes only those transformer layers identified as attention-critical. By employing attention patterns to determine layer importance, SAFL significantly reduces communication bandwidth and enhances differential privacy resilience. Evaluations on clinical NLP benchmarks (i2b2 Clinical Concept Extraction and MIMIC-III discharge summaries) demonstrate that SAFL achieves competitive performance with centralized models while substantially improving communication efficiency and privacy preservation.
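
To make the core idea concrete, here is a minimal PyTorch sketch of attention-guided selective fine-tuning: each layer is scored on a probe batch by how sharply its attention concentrates (an entropy heuristic), and only the top-scoring layers are left trainable, so only their updates would be communicated to the federation server. The entropy score, layer count, and probe setup are illustrative assumptions, not the authors' exact method.

import torch
import torch.nn as nn

class TinyEncoderLayer(nn.Module):
    """Simplified transformer encoder layer that also returns its attention map."""
    def __init__(self, d_model=64, nhead=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # need_weights=True returns the (head-averaged) attention map alongside the output
        out, weights = self.attn(x, x, x, need_weights=True)
        x = self.norm1(x + out)
        x = self.norm2(x + self.ff(x))
        return x, weights

def layer_importance(weights, eps=1e-9):
    """Score a layer by the negative mean entropy of its attention rows:
    sharply focused attention -> lower entropy -> higher importance.
    (One possible heuristic; the paper's actual criterion may differ.)"""
    entropy = -(weights * (weights + eps).log()).sum(-1).mean()
    return -entropy.item()

@torch.no_grad()
def select_critical_layers(layers, x, k=2):
    """Rank layers by attention importance on a probe batch; keep the top k."""
    scores = []
    for layer in layers:
        x, w = layer(x)          # propagate so each layer sees realistic inputs
        scores.append(layer_importance(w))
    ranked = sorted(range(len(layers)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

# Client side: freeze everything except the attention-critical layers, so only
# those layers' parameter updates need to be sent to the server each round.
layers = nn.ModuleList(TinyEncoderLayer() for _ in range(6))
probe = torch.randn(8, 16, 64)   # (batch, seq_len, d_model) probe batch
critical = select_critical_layers(layers, probe, k=2)
for i, layer in enumerate(layers):
    for p in layer.parameters():
        p.requires_grad = i in critical

print(f"fine-tuning layers {sorted(critical)} of {len(layers)}")

Freezing non-critical layers shrinks the per-round payload to a fraction of the model, which is also what lets a fixed differential-privacy noise budget be spent on fewer parameters.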

Page Count
6 pages

Category
Computer Science: Computation and Language