1-2-3 Check: Enhancing Contextual Privacy in LLM via Multi-Agent Reasoning
By: Wenkai Li, Liwen Sun, Zhenxiang Guan, and more
Potential Business Impact:
Keeps private talk secret when computers help.
Addressing contextual privacy concerns remains challenging in interactive settings where large language models (LLMs) process information from multiple sources (e.g., summarizing meetings with private and public information). We introduce a multi-agent framework that decomposes privacy reasoning into specialized subtasks (extraction, classification), reducing the information load on any single agent while enabling iterative validation and more reliable adherence to contextual privacy norms. To understand how privacy errors emerge and propagate, we conduct a systematic ablation over information-flow topologies, revealing when and why upstream detection mistakes cascade into downstream leakage. Experiments on the ConfAIde and PrivacyLens benchmarks with several open-source and closed-source LLMs demonstrate that our best multi-agent configuration substantially reduces private information leakage (18% on ConfAIde and 19% on PrivacyLens with GPT-4o) while preserving the fidelity of public content, outperforming single-agent baselines. These results highlight the promise of principled information-flow design in multi-agent systems for contextual privacy with LLMs.
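To make the decomposition concrete, here is a minimal sketch (not the paper's code) of a pipeline that splits contextual-privacy reasoning across specialized agents with an iterative validation pass. The `call_llm` wrapper, prompts, and agent roles are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-agent privacy pipeline: extraction -> classification
# -> iterative validation -> privacy-aware summary. All prompts and helpers are
# illustrative assumptions.

from dataclasses import dataclass


def call_llm(system_prompt: str, user_content: str) -> str:
    """Placeholder for any chat-completion call (OpenAI client, local model, etc.)."""
    raise NotImplementedError


@dataclass
class InfoItem:
    text: str
    label: str = "unknown"  # later set to "private" or "public"


def extract_items(document: str) -> list[InfoItem]:
    # Agent 1: pull out atomic information items from the mixed-source input.
    raw = call_llm("Extract each distinct piece of information as one line.", document)
    return [InfoItem(line.strip()) for line in raw.splitlines() if line.strip()]


def classify_item(item: InfoItem, context: str) -> InfoItem:
    # Agent 2: judge each item against the contextual norm of the sharing scenario.
    item.label = call_llm(
        "Given the sharing context, answer 'private' or 'public' for this item.",
        f"Context: {context}\nItem: {item.text}",
    ).strip().lower()
    return item


def validate(items: list[InfoItem], context: str, rounds: int = 2) -> list[InfoItem]:
    # Agent 3: iteratively re-check labels so upstream detection mistakes are
    # caught before they cascade into downstream leakage.
    for _ in range(rounds):
        for item in items:
            verdict = call_llm(
                "Double-check this privacy label; reply 'keep' or 'flip'.",
                f"Context: {context}\nItem: {item.text}\nLabel: {item.label}",
            ).strip().lower()
            if verdict == "flip":
                item.label = "private" if item.label == "public" else "public"
    return items


def summarize(document: str, context: str) -> str:
    items = validate([classify_item(i, context) for i in extract_items(document)], context)
    public_only = "\n".join(i.text for i in items if i.label == "public")
    # Final agent: compose the response only from items judged safe to share.
    return call_llm("Summarize these points for the recipient.", public_only)
```

Because each agent handles one narrow subtask, the prompts stay short and the validation loop can focus on borderline items rather than re-reading the full document.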
Similar Papers
Position: Privacy Is Not Just Memorization!
Cryptography and Security
Protects your secrets from smart computer programs.
Privacy-Aware In-Context Learning for Large Language Models
Machine Learning (CS)
Keeps your private writing safe from AI.
Contextual Integrity in LLMs via Reasoning and Reinforcement Learning
Artificial Intelligence
Teaches AI what private info to share.