FedMentor: Domain-Aware Differential Privacy for Heterogeneous Federated LLMs in Mental Health

Published: September 16, 2025 | arXiv ID: 2509.14275v1

By: Nobin Sarwar, Shubhashis Roy Dipta

Potential Business Impact:

Enables mental health AI to be fine-tuned privately while preserving safety and utility.

Privacy-preserving adaptation of Large Language Models (LLMs) in sensitive domains (e.g., mental health) requires balancing strict confidentiality with model utility and safety. We propose FedMentor, a federated fine-tuning framework that integrates Low-Rank Adaptation (LoRA) and domain-aware Differential Privacy (DP) to meet per-domain privacy budgets while maintaining performance. Each client (domain) applies a custom DP noise scale proportional to its data sensitivity, and the server adaptively reduces noise when utility falls below a threshold. In experiments on three mental health datasets, we show that FedMentor improves safety over standard Federated Learning without privacy, raising safe output rates by up to three points and lowering toxicity, while maintaining utility (BERTScore F1 and ROUGE-L) within 0.5% of the non-private baseline and close to the centralized upper bound. The framework scales to backbones with up to 1.7B parameters on single-GPU clients, requiring < 173 MB of communication per round. FedMentor demonstrates a practical approach to privately fine-tune LLMs for safer deployments in healthcare and other sensitive fields.
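The abstract describes two mechanisms: a per-domain noise scale proportional to data sensitivity, and a server-side rule that relaxes noise when utility drops below a threshold. A minimal sketch of that scheduling logic, with all function names, thresholds, and constants being illustrative assumptions rather than the paper's actual implementation:

```python
# Hypothetical sketch of domain-aware DP noise scheduling as summarized
# in the abstract. All names and constants here are illustrative; the
# paper's actual mechanism may differ.

def domain_noise_scale(base_sigma: float, sensitivity: float) -> float:
    """Noise multiplier proportional to the domain's data sensitivity."""
    return base_sigma * sensitivity


def adapt_noise(sigma: float, utility: float,
                utility_threshold: float = 0.80,
                decay: float = 0.9,
                sigma_floor: float = 0.5) -> float:
    """Server-side rule: if utility falls below the threshold, shrink
    the noise multiplier, never dropping below a floor that preserves
    a minimum level of privacy protection."""
    if utility < utility_threshold:
        return max(sigma_floor, sigma * decay)
    return sigma


# A high-sensitivity domain starts with a larger noise multiplier...
sigma = domain_noise_scale(base_sigma=1.0, sensitivity=1.5)
print(sigma)  # 1.5

# ...and the server reduces it after a round with low utility.
sigma = adapt_noise(sigma, utility=0.72)
print(round(sigma, 2))  # 1.35
```

The key design point the abstract highlights is that noise is not uniform across clients: more sensitive domains pay a larger privacy cost up front, and utility feedback bounds how much performance is sacrificed.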

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Cryptography and Security