Public Data Assisted Differentially Private In-Context Learning

Published: September 13, 2025 | arXiv ID: 2509.10932v1

By: Seongho Joo, Hyukhun Koh, Kyomin Jung

Potential Business Impact:

Lets LLMs learn from prompts that contain private data without leaking that data to attackers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In-context learning (ICL) in Large Language Models (LLMs) has shown remarkable performance across various tasks without requiring fine-tuning. However, recent studies have highlighted the risk of private data leakage through the prompt in ICL, especially when LLMs are exposed to malicious attacks. While differential privacy (DP) provides strong privacy guarantees, it often significantly reduces the utility of ICL. To address this challenge, the authors incorporate task-related public data into the ICL framework while maintaining the DP guarantee. Based on this approach, they propose a private in-context learning algorithm that effectively balances privacy protection and model utility. Through experiments, they demonstrate that the approach significantly improves the utility of private ICL with the assistance of public data, and that the method is robust against membership inference attacks, demonstrating empirical privacy protection.
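To make the DP guarantee mentioned in the abstract concrete: one standard way to privatize ICL outputs (used in PATE-style pipelines, and not necessarily the authors' specific algorithm) is noisy majority voting. Each private example set produces one "teacher" prediction, Laplace noise is added to the per-label vote counts, and only the noisy argmax label is released. Everything below — function names, the 16/4 vote split, the choice of epsilon — is an illustrative assumption for this sketch, not taken from the paper.

```python
import math
import random
from collections import Counter


def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def private_label_vote(votes, epsilon: float):
    """Report-noisy-max: add Laplace(1/epsilon) noise to each label's vote
    count and release only the winning label.  For counting queries this
    mechanism satisfies epsilon-DP with respect to any single vote, so no
    individual private prompt determines the released answer."""
    counts = Counter(votes)
    noisy = {label: c + laplace_noise(1.0 / epsilon) for label, c in counts.items()}
    return max(noisy, key=noisy.get)


# Hypothetical example: 20 teacher predictions, one per private prompt.
random.seed(0)
votes = ["positive"] * 16 + ["negative"] * 4
print(private_label_vote(votes, epsilon=1.0))
```

The paper's contribution, per the abstract, is to improve the utility side of this privacy/utility trade-off by mixing task-related public data into the private ICL procedure; the sketch above only shows the baseline privatization step that such methods build on.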

Country of Origin
🇰🇷 Korea, Republic of

Page Count
24 pages

Category
Computer Science:
Artificial Intelligence