Towards Confidential and Efficient LLM Inference with Dual Privacy Protection

Published: September 11, 2025 | arXiv ID: 2509.09091v1

By: Honglan Yu, Yibin Wang, Feifei Dai, and more

Potential Business Impact:

Protects sensitive user inputs during cloud-based LLM inference while limiting the performance cost that privacy mechanisms usually impose.

Business Areas:
Cloud Security, Information Technology, Privacy and Security

CPU-based trusted execution environments (TEEs) and differential privacy (DP) are widely used for private inference. Because inference inside TEEs is slow, researchers adopt partition-based approaches that offload linear model components to GPUs. However, the dense nonlinear layers of large language models (LLMs) cause significant communication overhead between TEEs and GPUs. DP-based approaches add random noise to protect data privacy, but this degrades LLM performance and semantic understanding. To overcome these drawbacks, this paper proposes CMIF, a Confidential and efficient Model Inference Framework. CMIF deploys the embedding layer confidentially in the client-side TEE and the subsequent layers on GPU servers. Meanwhile, it optimizes the Report-Noisy-Max mechanism to protect sensitive inputs with only a slight decrease in model performance. Extensive experiments on Llama-series models demonstrate that CMIF reduces the additional inference overhead of TEEs while preserving user data privacy.
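For context, the Report-Noisy-Max mechanism the abstract mentions is a standard differential-privacy primitive: add independent Laplace noise to each candidate's score and release only the index of the noisy maximum. The sketch below shows the textbook mechanism, not CMIF's optimized variant (the paper's specific optimization is not described in this summary); the function name and parameters are illustrative.

```python
import numpy as np

def report_noisy_max(scores, epsilon, sensitivity=1.0, rng=None):
    """Textbook Report-Noisy-Max (illustrative, not CMIF's variant).

    Adds Laplace noise with scale 2*sensitivity/epsilon to each
    candidate score and returns only the argmax index, which is
    epsilon-differentially private with respect to the scores.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    noise = rng.laplace(scale=2.0 * sensitivity / epsilon, size=scores.shape)
    return int(np.argmax(scores + noise))

# Example: with a large epsilon (little noise), the true maximum
# is selected almost surely; smaller epsilon randomizes the choice.
idx = report_noisy_max([0.0, 1.0, 10.0], epsilon=1e6)
```

Releasing only the winning index (rather than the noisy scores themselves) is what keeps the privacy cost at a single epsilon regardless of the number of candidates.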

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Cryptography and Security