Comparison of Fully Homomorphic Encryption and Garbled Circuit Techniques in Privacy-Preserving Machine Learning Inference
By: Kalyan Cheerla, Lotfi Ben Othmane, Kirill Morozov
Potential Business Impact:
Keeps your private data safe while computers make predictions from it.
As Machine Learning (ML) makes its way into fields such as healthcare, finance, and Natural Language Processing (NLP), concerns over data privacy and model confidentiality continue to grow. Privacy-Preserving Machine Learning (PPML) addresses this challenge by enabling inference on private data without revealing sensitive inputs or proprietary models, leveraging Secure Computation techniques from Cryptography. Two widely studied approaches in this domain are Fully Homomorphic Encryption (FHE) and Garbled Circuits (GC). This work presents a comparative evaluation of FHE and GC for secure neural network inference. A two-layer neural network (NN) was implemented using the CKKS scheme from the Microsoft SEAL library (FHE) and the TinyGarble2.0 framework from IntelLabs (GC). Both implementations were evaluated under the semi-honest threat model, measuring inference output error, round-trip time, peak memory usage, communication overhead, and number of communication rounds. The results reveal a trade-off: the modular GC implementation offers faster execution and lower memory consumption, while FHE supports non-interactive inference.
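For illustration, below is a minimal sketch of what one CKKS-encrypted layer can look like with the Microsoft SEAL C++ API (version 3.6 or later). The parameter sizes, toy input and weight values, and the square activation (a common polynomial stand-in for ReLU under CKKS) are assumptions made for this sketch, not the network or configuration evaluated in the paper.

// Sketch: one CKKS-encrypted layer (elementwise weights + square activation)
// using Microsoft SEAL. Values and parameters are illustrative assumptions.
#include <seal/seal.h>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    using namespace seal;

    // CKKS parameters: degree 8192 with a 60-40-40-60 bit modulus chain
    // gives two rescaling levels at a ~2^40 scale (assumed sizes).
    EncryptionParameters parms(scheme_type::ckks);
    size_t poly_modulus_degree = 8192;
    parms.set_poly_modulus_degree(poly_modulus_degree);
    parms.set_coeff_modulus(
        CoeffModulus::Create(poly_modulus_degree, {60, 40, 40, 60}));
    SEALContext context(parms);

    KeyGenerator keygen(context);
    SecretKey secret_key = keygen.secret_key();
    PublicKey public_key;
    keygen.create_public_key(public_key);
    RelinKeys relin_keys;
    keygen.create_relin_keys(relin_keys);

    Encryptor encryptor(context, public_key);
    Evaluator evaluator(context);
    Decryptor decryptor(context, secret_key);
    CKKSEncoder encoder(context);
    double scale = std::pow(2.0, 40);

    // Client side: encode and encrypt the input feature vector (toy values).
    std::vector<double> input = {0.5, -1.2, 3.0, 0.7};
    Plaintext pt_input;
    encoder.encode(input, scale, pt_input);
    Ciphertext ct;
    encryptor.encrypt(pt_input, ct);

    // Server side: elementwise plaintext-weight multiply (toy weights),
    // then a square activation; no interaction with the client is needed.
    std::vector<double> weights = {0.9, 0.1, -0.3, 0.42};
    Plaintext pt_weights;
    encoder.encode(weights, scale, pt_weights);
    evaluator.multiply_plain_inplace(ct, pt_weights);
    evaluator.rescale_to_next_inplace(ct);

    evaluator.square_inplace(ct);
    evaluator.relinearize_inplace(ct, relin_keys);
    evaluator.rescale_to_next_inplace(ct);

    // Client side: decrypt and decode the approximate result.
    Plaintext pt_result;
    decryptor.decrypt(ct, pt_result);
    std::vector<double> result;
    encoder.decode(pt_result, result);
    for (size_t i = 0; i < input.size(); i++) {
        std::cout << result[i] << " ";  // approx (input[i] * weights[i])^2
    }
    std::cout << std::endl;
    return 0;
}

The sketch also shows why FHE inference is non-interactive: after the client sends its single ciphertext, the server evaluates the whole layer locally, whereas a GC protocol such as TinyGarble2.0 requires the two parties to exchange garbled material and oblivious-transfer messages during evaluation.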
Similar Papers
HHEML: Hybrid Homomorphic Encryption for Privacy-Preserving Machine Learning on Edge
Cryptography and Security
Makes computers learn secrets without seeing them.
Design and Optimization of Cloud Native Homomorphic Encryption Workflows for Privacy-Preserving ML Inference
Cryptography and Security
Keeps your private data safe during computer learning.
Network and Compiler Optimizations for Efficient Linear Algebra Kernels in Private Transformer Inference
Cryptography and Security
Keeps your private AI chats secret from others.