Score: 0

ContextLeak: Auditing Leakage in Private In-Context Learning Methods

Published: December 18, 2025 | arXiv ID: 2512.16059v1

By: Jacob Choi , Shuying Cao , Xingjian Dong and more

Potential Business Impact:

Tests if AI leaks private info from examples.

Business Areas:
Cloud Security Information Technology, Privacy and Security

In-Context Learning (ICL) has become a standard technique for adapting Large Language Models (LLMs) to specialized tasks by supplying task-specific exemplars within the prompt. However, when these exemplars contain sensitive information, reliable privacy-preserving mechanisms are essential to prevent unintended leakage through model outputs. Many privacy-preserving methods are proposed to protect the information leakage in the context, but there are less efforts on how to audit those methods. We introduce ContextLeak, the first framework to empirically measure the worst-case information leakage in ICL. ContextLeak uses canary insertion, embedding uniquely identifiable tokens in exemplars and crafting targeted queries to detect their presence. We apply ContextLeak across a range of private ICL techniques, both heuristic such as prompt-based defenses and those with theoretical guarantees such as Embedding Space Aggregation and Report Noisy Max. We find that ContextLeak tightly correlates with the theoretical privacy budget ($ε$) and reliably detects leakage. Our results further reveal that existing methods often strike poor privacy-utility trade-offs, either leaking sensitive information or severely degrading performance.

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Cryptography and Security