Parameter-Efficient Fine-Tuning with Differential Privacy for Robust Instruction Adaptation in Large Language Models
By: Yulin Huang, Yaxuan Luan, Jinxu Guo, and more
Potential Business Impact:
Keeps AI learning private and fast.
This study addresses privacy protection and efficiency in instruction fine-tuning of large language models by proposing a parameter-efficient method that integrates differential-privacy noise allocation and gradient clipping in a collaborative optimization framework. The method keeps the backbone model frozen and updates parameters through a low-dimensional projection subspace, introducing clipping and adaptive noise allocation during gradient computation. This design reduces privacy-budget consumption while ensuring training stability and robustness. The unified framework combines gradient constraints, noise allocation, and parameter projection, effectively mitigating performance fluctuations and privacy risks in multi-task instruction scenarios. Experiments span hyperparameter, environment, and data-sensitivity dimensions. Results show that the method outperforms baseline models in accuracy, privacy budget, and parameter efficiency, and maintains stable performance under diverse and uncertain data conditions. The findings enrich the theoretical integration of differential privacy with parameter-efficient fine-tuning, demonstrate practical adaptability in instruction tasks, and offer a feasible path to secure training in complex instruction environments.
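The core mechanism described above, per-example gradient clipping followed by calibrated Gaussian noise applied only to trainable low-rank factors while the backbone stays frozen, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name, dimensions, learning rate, and fixed noise multiplier are all hypothetical, and the paper's adaptive noise-allocation rule is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
    """DP-SGD-style update: clip each per-example gradient to
    L2 norm <= clip_norm, average, then add Gaussian noise
    calibrated to the clipping bound."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n,
                       size=mean_grad.shape)
    return mean_grad + noise

# Frozen backbone weight W; only the low-rank factors A, B are trained,
# so clipping and noise act on a much smaller parameter space.
d_out, d_in, r = 8, 16, 2                    # illustrative sizes (assumed)
W = rng.normal(size=(d_out, d_in))           # frozen: no gradient, no noise
A = np.zeros((d_out, r))                     # trainable low-rank factor
B = rng.normal(scale=0.01, size=(r, d_in))   # trainable low-rank factor

# Toy per-example gradients for A (flattened), then one private update.
grads_A = [rng.normal(size=A.size) for _ in range(4)]
noisy_grad = dp_step(grads_A, clip_norm=1.0, noise_multiplier=0.5)
A -= 0.1 * noisy_grad.reshape(A.shape)       # SGD step on the adapter only
effective_W = W + A @ B                      # forward pass uses W + AB
```

Because noise scales with the clipping bound and is injected only into the r-dimensional adapter subspace, the privacy budget consumed per step is decoupled from the full backbone dimension, which is the intuition behind combining differential privacy with parameter-efficient fine-tuning.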
Similar Papers
Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Cryptography and Security
Keeps private info safe when AI learns new things.
Dual-Priv Pruning: Efficient Differential Private Fine-Tuning in Multimodal Large Language Models
Cryptography and Security
Keeps AI's private info safe while learning.
SA-ADP: Sensitivity-Aware Adaptive Differential Privacy for Large Language Models
Machine Learning (CS)
Protects private info without hurting computer smarts.