Adaptive Backtracking for Privacy Protection in Large Language Models
By: Zhihao Yao, Yuxuan Gu, Xiachong Feng, et al.
Potential Business Impact:
Keeps company secrets safe from smart computer programs.
The preservation of privacy has emerged as a critical topic in the era of artificial intelligence. However, current work focuses on user-oriented privacy, overlooking severe enterprise data leakage risks exacerbated by the Retrieval-Augmented Generation paradigm. To address this gap, our paper introduces a novel objective: enterprise-oriented privacy concerns. Achieving this objective requires overcoming two fundamental challenges: existing methods such as data sanitization severely degrade model performance, and the field lacks public datasets for evaluation. We address these challenges with two solutions. (1) To prevent performance degradation, we propose ABack, a training-free mechanism that leverages a Hidden State Model to pinpoint the origin of a leakage intention and rewrite the output safely. (2) To solve the lack of datasets, we construct PriGenQA, a new benchmark for enterprise privacy scenarios in healthcare and finance. To ensure a rigorous evaluation, we move beyond simple static attacks by developing a powerful adaptive attacker with Group Relative Policy Optimization. Experiments show that against this superior adversary, ABack improves the overall privacy-utility score by up to 15% over strong baselines, avoiding the performance trade-offs of prior methods.
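The adaptive attacker is trained with Group Relative Policy Optimization (GRPO). At GRPO's core, each sampled attack prompt's reward is normalized against its sampling group rather than a learned value model. A minimal sketch of that group-relative advantage step (the function name and reward values are illustrative, not from the paper):

```python
import statistics

def grpo_advantages(rewards):
    """Normalize each reward against the mean and std of its sampling
    group, so the policy update favors above-average samples without
    needing a separate critic/value model."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]

# Illustrative rewards for a group of 4 attack prompts scored by a
# hypothetical leakage judge (1.0 = private data fully extracted).
group_rewards = [0.2, 0.9, 0.5, 0.4]
advantages = grpo_advantages(group_rewards)
```

By construction the advantages of a group sum to zero, so prompts that extract more private data than their group's average receive positive advantage and are reinforced.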
Similar Papers
SA-ADP: Sensitivity-Aware Adaptive Differential Privacy for Large Language Models
Machine Learning (CS)
Protects private info without hurting computer smarts.
Privacy Preservation in Gen AI Applications
Cryptography and Security
Keeps your private info safe from smart computer programs.
Privacy-Aware In-Context Learning for Large Language Models
Machine Learning (CS)
Keeps your private writing safe from AI.