ContextLeak: Auditing Leakage in Private In-Context Learning Methods
By: Jacob Choi , Shuying Cao , Xingjian Dong and more
Potential Business Impact:
Tests if AI leaks private info from examples.
In-Context Learning (ICL) has become a standard technique for adapting Large Language Models (LLMs) to specialized tasks by supplying task-specific exemplars within the prompt. However, when these exemplars contain sensitive information, reliable privacy-preserving mechanisms are essential to prevent unintended leakage through model outputs. Many privacy-preserving methods are proposed to protect the information leakage in the context, but there are less efforts on how to audit those methods. We introduce ContextLeak, the first framework to empirically measure the worst-case information leakage in ICL. ContextLeak uses canary insertion, embedding uniquely identifiable tokens in exemplars and crafting targeted queries to detect their presence. We apply ContextLeak across a range of private ICL techniques, both heuristic such as prompt-based defenses and those with theoretical guarantees such as Embedding Space Aggregation and Report Noisy Max. We find that ContextLeak tightly correlates with the theoretical privacy budget ($ε$) and reliably detects leakage. Our results further reveal that existing methods often strike poor privacy-utility trade-offs, either leaking sensitive information or severely degrading performance.
Similar Papers
Tight and Practical Privacy Auditing for Differentially Private In-Context Learning
Cryptography and Security
Checks if AI models leak private information.
Public Data Assisted Differentially Private In-Context Learning
Artificial Intelligence
Keeps private info safe while AI learns.
Privacy-Aware In-Context Learning for Large Language Models
Machine Learning (CS)
Keeps your private writing safe from AI.