SecureInfer: Heterogeneous TEE-GPU Architecture for Privacy-Critical Tensors in Large Language Model Deployment
By: Tushar Nayan, Ziqi Zhang, Ruimin Sun
Potential Business Impact:
Keeps AI private on phones, still fast.
With the increasing deployment of Large Language Models (LLMs) on mobile and edge platforms, securing them against model extraction attacks has become a pressing concern. However, protecting model privacy without sacrificing the performance benefits of untrusted AI accelerators, such as GPUs, presents a challenging trade-off. In this paper, we initiate the study of secure, high-performance LLM execution and present SecureInfer, a hybrid framework that leverages a heterogeneous Trusted Execution Environment (TEE)-GPU architecture to isolate privacy-critical components while offloading compute-intensive operations to untrusted accelerators. Building upon an outsourcing scheme, SecureInfer adopts an information-theoretic and threat-informed partitioning strategy: security-sensitive components, including non-linear layers, attention-head projections, FFN transformations, and LoRA adapters, are executed inside an SGX enclave, while the remaining linear operations (matrix multiplications) are performed on the GPU after encryption, and their results are securely restored within the enclave. We implement a prototype of SecureInfer using the LLaMA-2 model and evaluate it across performance and security metrics. Our results show that SecureInfer provides strong security guarantees with reasonable performance overhead, offering a practical solution for secure on-device model inference.
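To make the outsourcing step concrete, the sketch below illustrates one common additive-masking scheme for offloading a linear layer's matrix multiplication to an untrusted GPU while the unmasking secret stays inside the enclave. The class and function names, the use of NumPy as a stand-in for the enclave and GPU runtimes, and the specific masking construction are illustrative assumptions, not SecureInfer's actual protocol.

```python
# Hypothetical sketch of the TEE/GPU split described in the abstract.
# Only non-sensitive linear weights are offloaded; non-linear layers,
# attention-head projections, FFN transforms, and LoRA adapters would
# stay inside the enclave per the paper's partitioning strategy.
import numpy as np


class EnclaveLinear:
    """Runs inside the TEE: holds the weight and the per-call unmasking secret."""

    def __init__(self, weight: np.ndarray, rng: np.random.Generator):
        self.weight = weight  # W, known to the enclave
        self.rng = rng

    def mask(self, x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        # Blind the activation with a one-time random mask R before it leaves the TEE,
        # and precompute R @ W inside the enclave so the mask can be stripped later.
        r = self.rng.standard_normal(x.shape)
        return x + r, r @ self.weight

    def unmask(self, gpu_result: np.ndarray, rw: np.ndarray) -> np.ndarray:
        # (X + R) @ W - R @ W = X @ W, recovered without exposing X outside the TEE.
        return gpu_result - rw


def gpu_matmul(masked_x: np.ndarray, weight_on_gpu: np.ndarray) -> np.ndarray:
    # Untrusted accelerator: only ever sees the masked activation.
    return masked_x @ weight_on_gpu


# Toy end-to-end check of the round trip.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
layer = EnclaveLinear(W, rng)

x = rng.standard_normal((2, 8))
masked_x, rw = layer.mask(x)
y = layer.unmask(gpu_matmul(masked_x, W), rw)
assert np.allclose(y, x @ W)
```

In this toy version the masking cost is a second matrix multiplication inside the enclave; practical schemes amortize or precompute that step so the GPU still carries the bulk of the compute.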
Similar Papers
Confidential LLM Inference: Performance and Cost Across CPU and GPU TEEs
Performance
Keeps private AI information safe during use.
Towards Confidential and Efficient LLM Inference with Dual Privacy Protection
Cryptography and Security
Keeps your private data safe during AI use.
TZ-LLM: Protecting On-Device Large Language Models with Arm TrustZone
Cryptography and Security
Keeps smartphone AI secrets safe from hackers.