Network and Compiler Optimizations for Efficient Linear Algebra Kernels in Private Transformer Inference
By: Karthik Garimella, Negar Neda, Austin Ebel, and more
Potential Business Impact:
Keeps your private AI chats secret from others.
Large language model (LLM) based services are primarily structured as client-server interactions, with clients sending queries directly to cloud providers that host LLMs. This approach currently compromises data privacy, as all queries must be processed in the cloud and in the clear. Fully Homomorphic Encryption (FHE) offers a solution to this privacy issue by enabling computation directly on encrypted queries. However, running encrypted transformer inference is challenging, as programmers must map standard kernels to the constrained instruction set provided by FHE. In this work, we explore FHE implementations of the linear algebra kernels needed for transformer inference and study how network optimization can help mitigate FHE costs while remaining performant. We leverage Orion, a PyTorch-to-FHE framework, to benchmark several linear algebra kernels and profile two linear transformation methods, packed-row and baby-step giant-step (BSGS), finding that BSGS outperforms the packed-row method by up to $13.7\times$ at transformer-level scales. We also incorporate network-level pruning strategies that reduce FHE runtimes of feed-forward layers by up to $11.46\times$. Furthermore, we extend Orion to include ciphertext-ciphertext matrix-matrix products, a key component of self-attention blocks. Finally, we perform a roofline analysis of FHE primitives and encrypted linear transformations and find that SIMD-encoded implementations are memory-bound, with primitives performing roughly $0.1$ integer operations per byte of DRAM traffic. These findings illustrate the need to explore alternative encoding schemes and models of computation within CKKS to unlock scalable private transformer inference. We conduct all experiments using the Orion framework, which can be found at: https://github.com/baahl-nyu/orion.
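To make the BSGS comparison concrete, below is a minimal plaintext (NumPy) sketch of the Halevi-Shoup diagonal method with baby-step giant-step rotation reuse, the style of linear transformation profiled above. This is an illustrative simulation under assumed conventions, not Orion's API; the names `rot`, `diag`, and `bsgs_matvec` are hypothetical. In CKKS each `np.roll` would be a costly ciphertext rotation, which is why cutting the rotation count from $n$ to roughly $g + n/g$ matters.

```python
import numpy as np

def rot(x, r):
    """Cyclic left-rotation by r slots (stands in for a CKKS slot rotation)."""
    return np.roll(x, -r)

def diag(M, i):
    """i-th generalized diagonal: diag(M, i)[j] = M[j, (j + i) % n]."""
    n = M.shape[0]
    idx = np.arange(n)
    return M[idx, (idx + i) % n]

def bsgs_matvec(M, v, g):
    """Halevi-Shoup matvec with baby-step/giant-step rotation reuse.

    Uses about g + n/g rotations instead of n (n must be a multiple
    of g). Everything here is plaintext; in CKKS each rot() is an
    expensive ciphertext rotation, so the reduced count is the win.
    """
    n = M.shape[0]
    assert n % g == 0
    baby = [rot(v, b) for b in range(g)]      # g baby-step rotations of v
    acc = np.zeros(n)
    for k in range(n // g):                   # n/g giant steps
        inner = np.zeros(n)
        for b in range(g):
            # Pre-rotating the diagonal by -k*g lets one giant-step
            # rotation of `inner` realign all g terms at once.
            inner += rot(diag(M, k * g + b), -k * g) * baby[b]
        acc += rot(inner, k * g)
    return acc

# Sanity check against a direct matrix-vector product.
n, g = 16, 4                                  # g ~ sqrt(n)
M = np.random.randn(n, n)
v = np.random.randn(n)
assert np.allclose(bsgs_matvec(M, v, g), M @ v)
```

The roofline conclusion follows from the same kind of back-of-the-envelope arithmetic: on a machine with, say, 100 GB/s of DRAM bandwidth and $10^{12}$ integer ops/s of peak throughput (illustrative figures, not measurements from the paper), the ridge point sits at 10 ops/byte, so kernels at roughly $0.1$ ops/byte fall two orders of magnitude below it and their runtime is set by memory traffic rather than compute.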
Similar Papers
A Scalable Multi-GPU Framework for Encrypted Large-Model Inference
Cryptography and Security
Lets AI learn secrets without seeing them.
FastFHE: Packing-Scalable and Depthwise-Separable CNN Inference Over FHE
Cryptography and Security
Speeds up AI that works on secret data.
Practical and Private Hybrid ML Inference with Fully Homomorphic Encryption
Cryptography and Security
Keeps secrets safe while computers do math.