Agentic Privacy-Preserving Machine Learning
By: Mengyu Zhang, Zhuotao Liu, Jingwen Huang, and more
Potential Business Impact:
Makes AI understand private messages safely.
Privacy-preserving machine learning (PPML) is critical to ensuring data privacy in AI. Over the past few years, the community has proposed a wide range of provably secure PPML schemes that rely on various cryptographic primitives. However, when it comes to large language models (LLMs) with billions of parameters, the efficiency of PPML is anything but acceptable. For instance, the state-of-the-art solution for confidential LLM inference runs at least 10,000 times slower than plaintext inference, and the performance gap widens further as the context length increases. In this position paper, we propose a novel framework named Agentic-PPML to make PPML practical for LLMs. Our key insight is to employ a general-purpose LLM for intent understanding and delegate cryptographically secure inference to specialized models trained on vertical domains. By modularly separating language intent parsing - which typically involves little or no sensitive information - from privacy-critical computation, Agentic-PPML completely eliminates the need for the LLM to process encrypted prompts, enabling practical deployment of privacy-preserving LLM-centric services.
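To make the division of labor concrete, below is a minimal Python sketch of the control flow the abstract describes: a general-purpose model parses the non-sensitive intent in plaintext and routes the request, while the privacy-critical payload stays encrypted end to end and is only ever touched by a domain-specialized model under a cryptographic protocol. All names here (parse_intent_plaintext, encrypted_inference, EncryptedTensor) are hypothetical illustrations, not the paper's API, and the cryptography is stubbed out; a real deployment would substitute a homomorphic-encryption or MPC backend.

```python
# Minimal sketch of the Agentic-PPML control flow, under the assumptions
# stated above. All identifiers are hypothetical; the cryptographic layer
# is a placeholder for a real HE/MPC library.

from dataclasses import dataclass


@dataclass
class EncryptedTensor:
    """Placeholder for a ciphertext produced client-side."""
    blob: bytes


def parse_intent_plaintext(user_query: str) -> str:
    """Step 1: a general-purpose LLM sees only the non-sensitive intent.

    Intent parsing typically involves little or no sensitive information,
    so it runs on plaintext. Keyword routing stands in for the LLM here.
    """
    if "symptom" in user_query or "diagnosis" in user_query:
        return "medical"
    if "portfolio" in user_query or "loan" in user_query:
        return "finance"
    return "general"


def encrypted_inference(domain: str, ciphertext: EncryptedTensor) -> EncryptedTensor:
    """Step 2: a small domain-specialized model runs under encryption.

    Because this model is far smaller than a billion-parameter LLM,
    cryptographically secure inference becomes tractable. Stubbed here;
    a real system would evaluate the model homomorphically or via MPC.
    """
    return EncryptedTensor(blob=b"enc(" + domain.encode() + b":" + ciphertext.blob + b")")


def agentic_ppml_pipeline(user_query: str, private_payload: EncryptedTensor) -> EncryptedTensor:
    """Route on plaintext intent; never decrypt the private payload server-side."""
    domain = parse_intent_plaintext(user_query)
    return encrypted_inference(domain, private_payload)


if __name__ == "__main__":
    ct = EncryptedTensor(blob=b"0xdeadbeef")  # client-encrypted record
    result = agentic_ppml_pipeline("Please review these symptoms", ct)
    print(result)  # still encrypted; only the client holds the decryption key
```

The point of the split is that the expensive cryptographic machinery only ever has to cover the small specialized model, never the billion-parameter general-purpose LLM, which is what makes the end-to-end service practical.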
Similar Papers
Towards Efficient Privacy-Preserving Machine Learning: A Systematic Review from Protocol, Model, and System Perspectives
Cryptography and Security
Keeps your data safe when computers learn.
Privacy Preserving Machine Learning Model Personalization through Federated Personalized Learning
Machine Learning (CS)
Keeps your private data safe when AI learns.
Beyond Data Privacy: New Privacy Risks for Large Language Models
Cryptography and Security
Protects your secrets from smart computer programs.