DP-FedLoRA: Privacy-Enhanced Federated Fine-Tuning for On-Device Large Language Models
By: Honghui Xu, Shiva Shrestha, Wei Chen, and more
Potential Business Impact:
Keeps your phone's smart talk private.
As on-device large language model (LLM) systems become increasingly prevalent, federated fine-tuning enables advanced language understanding and generation directly on edge devices; however, it also involves processing sensitive, user-specific data, raising significant privacy concerns within the federated learning framework. To address these challenges, we propose DP-FedLoRA, a privacy-enhanced federated fine-tuning framework that integrates LoRA-based adaptation with differential privacy in a communication-efficient setting. Each client locally clips and perturbs its LoRA matrices using Gaussian noise to satisfy ($\epsilon$, $\delta$)-differential privacy. We further provide a theoretical analysis demonstrating the unbiased nature of the updates and deriving bounds on the variance introduced by noise, offering practical guidance for privacy-budget calibration. Experimental results across mainstream benchmarks show that DP-FedLoRA delivers competitive performance while offering strong privacy guarantees, paving the way for scalable and privacy-preserving LLM deployment in on-device environments.
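The per-client step described above (clip each LoRA matrix, then add Gaussian noise before sending it to the server) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of Frobenius-norm clipping, and the values of `clip_norm` and `sigma` are assumptions; the actual noise multiplier would be calibrated from the ($\epsilon$, $\delta$) budget as the paper's analysis prescribes.

```python
import numpy as np

def dp_perturb_lora(A, B, clip_norm=1.0, sigma=0.8, seed=None):
    """Clip each LoRA factor to a maximum Frobenius norm, then add
    Gaussian noise scaled to that norm (the Gaussian mechanism).

    Illustrative sketch only: clip_norm and sigma are placeholder
    values, not the paper's calibrated privacy parameters.
    """
    rng = np.random.default_rng(seed)
    perturbed = []
    for M in (A, B):
        # Scale the matrix down if its Frobenius norm exceeds clip_norm.
        norm = np.linalg.norm(M)
        M_clipped = M * min(1.0, clip_norm / max(norm, 1e-12))
        # Gaussian noise with std proportional to the clipping bound.
        noise = rng.normal(0.0, sigma * clip_norm, size=M.shape)
        perturbed.append(M_clipped + noise)
    return tuple(perturbed)

# Example: rank-2 LoRA factors for a 4x4 weight update.
A = np.ones((4, 2))
B = np.ones((2, 4))
A_priv, B_priv = dp_perturb_lora(A, B, clip_norm=1.0, sigma=0.8, seed=0)
```

Clipping bounds each client's sensitivity, which is what lets the added Gaussian noise yield an ($\epsilon$, $\delta$) guarantee; because the noise is zero-mean, the aggregated update remains unbiased, matching the theoretical analysis in the abstract.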
Similar Papers
FedLoRA-Optimizer: Federated LoRA Fine-Tuning with Global and Local Optimization in Heterogeneous Data Scenarios
Machine Learning (CS)
Improves AI learning from many different users.
FedMentor: Domain-Aware Differential Privacy for Heterogeneous Federated LLMs in Mental Health
Cryptography and Security
Keeps mental health AI private and safe.
EcoLoRA: Communication-Efficient Federated Fine-Tuning of Large Language Models
Distributed, Parallel, and Cluster Computing
Makes AI learn faster with less data sent.