SA-ADP: Sensitivity-Aware Adaptive Differential Privacy for Large Language Models

Published: December 1, 2025 | arXiv ID: 2512.01748v1

By: Stella Etuk, Ashraf Matrawy

Potential Business Impact:

Protects personal information in training data without degrading the model's performance.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Despite advances in the use of large language models (LLMs) in downstream tasks, their ability to memorize information has raised privacy concerns. Protecting personally identifiable information (PII) during LLM training therefore remains a fundamental challenge. Conventional methods such as Differentially Private Stochastic Gradient Descent (DP-SGD) provide robust privacy protection by adding uniform noise, treating all PII the same regardless of how sensitive it is. This comes at the expense of model utility, creating a privacy-utility trade-off. In this paper, we propose SA-ADP, a sensitivity-aware approach that allocates noise based on the sensitivity of individual PII. We evaluated our method on four datasets (ABCD, CUSTOMERSIM, Wikitext-2, and UNSW-NB15). Our results show that SA-ADP achieves utility comparable to both the no-privacy baseline (No-DP) and conventional DP-SGD, meaning our method maintains strong privacy protection without degrading the model's utility.
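The abstract only sketches the mechanism, so the following is a minimal, illustrative Python sketch of the core idea: clip per-example gradients as in standard DP-SGD, but scale the Gaussian noise for each example by a PII sensitivity score instead of using one uniform multiplier. The paper's actual allocation rule and sensitivity scoring are not given in the abstract, so the mapping and all function names here (sa_adp_step, sensitivity_aware_noise_scale) are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_gradient(grad, clip_norm):
    """Clip a per-example gradient to an L2 norm bound (standard DP-SGD step)."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))

def sensitivity_aware_noise_scale(sensitivity, base_sigma, min_sigma=0.5):
    """Map a PII sensitivity score in [0, 1] to a noise multiplier.

    Hypothetical mapping: highly sensitive records get the full base noise,
    low-sensitivity records get less. Linear interpolation is an assumption;
    the paper may use a different allocation rule.
    """
    return min_sigma + sensitivity * (base_sigma - min_sigma)

def sa_adp_step(per_example_grads, sensitivities, clip_norm=1.0, base_sigma=2.0):
    """One noisy aggregation step: clip each gradient, add Gaussian noise
    scaled to that example's sensitivity, then average. Uniform DP-SGD would
    instead apply base_sigma to every example."""
    noisy = []
    for g, s in zip(per_example_grads, sensitivities):
        g = clip_gradient(g, clip_norm)
        sigma = sensitivity_aware_noise_scale(s, base_sigma)
        noisy.append(g + rng.normal(0.0, sigma * clip_norm, size=g.shape))
    return np.mean(noisy, axis=0)

# Toy usage: three per-example gradients with different PII sensitivity scores,
# e.g. name + account number vs. generic text vs. partially identifying text.
grads = [rng.normal(size=8) for _ in range(3)]
sens = [0.9, 0.2, 0.5]
print(sa_adp_step(grads, sens))
```

The design intuition this sketch captures is the one stated in the abstract: less sensitive examples receive less noise, so their gradient signal is better preserved and aggregate utility stays close to the No-DP baseline, while highly sensitive PII still receives strong noising.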

Country of Origin
🇨🇦 Canada

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)