Exploiting Unstructured Sparsity in Fully Homomorphic Encrypted DNNs
By: Aidan Ferguson, Perry Gibson, Lara D'Agata, and more
Potential Business Impact:
Makes secret computer math much faster.
The deployment of deep neural networks (DNNs) in privacy-sensitive environments is constrained by the computational overhead of fully homomorphic encryption (FHE). This paper explores unstructured sparsity in FHE matrix multiplication schemes as a means of reducing this burden while meeting model accuracy requirements. We demonstrate that sparsity can be exploited in arbitrary matrix multiplication, providing runtime benefits over a naive baseline algorithm at all sparsity levels. This is a notable departure from the plaintext domain, where there is a trade-off between sparsity and the overhead of the sparse multiplication algorithm. In addition, we propose three sparse multiplication schemes in FHE based on common plaintext sparse encodings. We demonstrate that the performance gain is scheme-invariant; however, some sparse schemes vastly reduce the memory required to store the encrypted matrix at high sparsity levels. Our proposed sparse schemes yield an average performance gain of 2.5x at 50% unstructured sparsity, and our multi-threading scheme provides a 32.5x speedup over the equivalent single-threaded sparse computation when using 64 cores.
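To make the core intuition concrete, here is a minimal sketch (not the authors' implementation) of why unstructured sparsity pays off in FHE even at low sparsity: each homomorphic multiply or add is orders of magnitude more expensive than the indexing bookkeeping of a sparse format such as CSR, so every skipped zero is an almost pure saving. The names `MockCiphertext`, `csr_encode`, and `sparse_matvec` are hypothetical; the mock class stands in for a real FHE ciphertext (e.g. CKKS) and simply counts homomorphic operations.

```python
# Hypothetical sketch: CSR-encoded plaintext matrix times an encrypted
# vector, skipping zero entries. Not the paper's scheme; an illustration
# of the operation-count argument only.
import numpy as np

class MockCiphertext:
    """Placeholder for an encrypted scalar; counts homomorphic ops."""
    mul_count = 0
    add_count = 0

    def __init__(self, value):
        self.value = value  # plaintext shadow, only to keep the demo checkable

    def __mul__(self, plain_scalar):
        MockCiphertext.mul_count += 1
        return MockCiphertext(self.value * plain_scalar)

    def __add__(self, other):
        MockCiphertext.add_count += 1
        return MockCiphertext(self.value + other.value)

def csr_encode(matrix):
    """Encode a dense matrix in CSR form: (values, col_indices, row_ptr)."""
    values, col_indices, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))
    return values, col_indices, row_ptr

def sparse_matvec(csr, enc_vector):
    """Plaintext CSR matrix times encrypted vector: one homomorphic
    multiply-accumulate per stored nonzero; zeros cost nothing."""
    values, col_indices, row_ptr = csr
    result = []
    for r in range(len(row_ptr) - 1):
        acc = None
        for k in range(row_ptr[r], row_ptr[r + 1]):
            term = enc_vector[col_indices[k]] * values[k]
            acc = term if acc is None else acc + term
        result.append(acc if acc is not None else MockCiphertext(0.0))
    return result

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W[rng.random((64, 64)) < 0.5] = 0.0           # ~50% unstructured sparsity
x_enc = [MockCiphertext(v) for v in rng.standard_normal(64)]

y = sparse_matvec(csr_encode(W), x_enc)
dense_muls = W.size                            # naive scheme: one mul per entry
print(f"homomorphic muls: {MockCiphertext.mul_count} vs dense {dense_muls}")
```

At ~50% sparsity this halves the homomorphic multiplications, roughly consistent with the 2.5x average gain the abstract reports once ciphertext cost dominates. Note also that each output row is computed independently, which is what makes the row-level work embarrassingly parallel; the paper's multi-threading scheme exploits this kind of independence to reach its reported 32.5x speedup on 64 cores.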
Similar Papers
Efficient Privacy-Preserving Recommendation on Sparse Data using Fully Homomorphic Encryption
Cryptography and Security
Keeps your private data safe for movie suggestions.
FicGCN: Unveiling the Homomorphic Encryption Efficiency from Irregular Graph Convolutional Networks
Cryptography and Security
Keeps private data safe during smart computer learning.
Measuring Computational Universality of Fully Homomorphic Encryption
Cryptography and Security
Lets computers do math on secret information.