Public Data Assisted Differentially Private In-Context Learning
By: Seongho Joo, Hyukhun Koh, Kyomin Jung
Potential Business Impact:
Keeps private info safe while AI learns.
In-context learning (ICL) in Large Language Models (LLMs) has shown remarkable performance across various tasks without requiring fine-tuning. However, recent studies have highlighted the risk of private data leaking through the prompt in ICL, especially when LLMs are exposed to malicious attacks. While differential privacy (DP) provides strong privacy guarantees, it often significantly reduces the utility of ICL. To address this challenge, we incorporate task-related public data into the ICL framework while maintaining the DP guarantee. Building on this approach, we propose a private in-context learning algorithm that effectively balances privacy protection and model utility. Through experiments, we demonstrate that our approach significantly improves the utility of private ICL with the assistance of public data. Additionally, we show that our method is robust against membership inference attacks, providing empirical evidence of its privacy protection.
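The abstract does not spell out the algorithm, so the sketch below only illustrates one common pattern for DP in-context learning: noisy ensemble voting (report-noisy-max) over disjoint private demonstration sets, with public examples prepended at no privacy cost. It is a minimal illustration, not the paper's method; `query_llm`, `num_teachers`, and the placement of public demonstrations are all assumptions.

```python
import numpy as np

def dp_icl_classify(private_examples, public_examples, query, labels,
                    query_llm, num_teachers=10, epsilon=1.0, seed=0):
    """Label `query` by a noisy majority vote over disjoint private prompts.

    Each private example appears in exactly one teacher prompt, so changing
    a single private record changes at most one teacher's vote. Adding
    Laplace(1/epsilon) noise to each vote count and releasing only the
    argmax (report-noisy-max, sensitivity 1) gives an epsilon-DP label.
    `query_llm(demos, query)` is a hypothetical helper assumed to return
    one label from `labels`.
    """
    rng = np.random.default_rng(seed)

    # Partition the private data into disjoint demonstration shards.
    shards = np.array_split(np.array(private_examples, dtype=object),
                            num_teachers)

    votes = {label: 0 for label in labels}
    for shard in shards:
        # Public demonstrations carry no privacy cost, so every teacher
        # can reuse them alongside its private shard (one assumed way the
        # public data could assist private ICL).
        demos = list(public_examples) + list(shard)
        prediction = query_llm(demos, query)
        if prediction in votes:
            votes[prediction] += 1

    # Report-noisy-max: perturb each count, release only the winning label.
    noisy = {lab: c + rng.laplace(scale=1.0 / epsilon)
             for lab, c in votes.items()}
    return max(noisy, key=noisy.get)
```

Under this pattern, utility degrades as epsilon shrinks because the Laplace noise can flip the winning label; the public examples raise each teacher's accuracy without consuming any of the privacy budget, which is the intuition behind public-data-assisted private ICL.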
Similar Papers
Privacy-Aware In-Context Learning for Large Language Models
Machine Learning (CS)
Keeps your private writing safe from AI.
Differentially Private In-Context Learning with Nearest Neighbor Search
Machine Learning (CS)
Protects your private info when AI learns.