Guarding Your Conversations: Privacy Gatekeepers for Secure Interactions with Cloud-Based AI Models
By: GodsGift Uzor, Hasan Al-Qudah, Ynes Ineza, and more
Potential Business Impact:
Keeps your private chat info from reaching cloud AI providers.
The interactive nature of Large Language Models (LLMs), which closely track user data and context, has prompted users to share personal and private information in unprecedented ways. Even when users opt out of allowing their data to be used for training, these privacy settings offer limited protection when LLM providers operate in jurisdictions with weak privacy laws, invasive government surveillance, or poor data security practices. In such cases, the risk of sensitive information, including Personally Identifiable Information (PII), being mishandled or exposed remains high. To address this, we propose the concept of an "LLM gatekeeper", a lightweight, locally run model that filters out sensitive information from user queries before they are sent to the potentially untrustworthy, though highly capable, cloud-based LLM. Through experiments with human subjects, we demonstrate that this dual-model approach introduces minimal overhead while significantly enhancing user privacy, without compromising the quality of LLM responses.
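The dual-model flow described in the abstract is straightforward to picture in code. Below is a minimal sketch of the idea: the paper's gatekeeper is a lightweight local model, but here a few regex patterns for common PII categories stand in for it, and the names used (PII_PATTERNS, gatekeep, ask_cloud_llm) are hypothetical illustrations, not the authors' implementation.

```python
import re

# Hypothetical redaction patterns standing in for the paper's lightweight
# local gatekeeper model: simple regexes for a few common PII categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def gatekeep(query: str) -> str:
    """Redact sensitive spans locally, before the query leaves the device."""
    for label, pattern in PII_PATTERNS.items():
        query = pattern.sub(f"[{label}]", query)
    return query

def ask_cloud_llm(redacted_query: str) -> str:
    """Placeholder for the call to the remote, highly capable LLM.
    Only the redacted query ever crosses the network boundary."""
    raise NotImplementedError("wire up your cloud LLM client here")

if __name__ == "__main__":
    query = "I'm Jane (jane.doe@example.com, 555-867-5309); draft a complaint letter."
    print(gatekeep(query))
    # -> "I'm Jane ([EMAIL], [PHONE]); draft a complaint letter."
```

In the paper's design the filter is itself a small model rather than fixed patterns, so detection can generalize beyond what regexes capture; the essential property either way is that redaction happens on-device, so the cloud provider never sees the raw query.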
Similar Papers
Beyond Data Privacy: New Privacy Risks for Large Language Models
Cryptography and Security
Protects your secrets from smart computer programs.
The Gatekeeper Knows Enough
Artificial Intelligence
Helps AI agents remember and use large amounts of information.
SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation
Cryptography and Security
Keeps your private info safe from smart computer programs.