Efficient Private Inference Based on Helper-Assisted Malicious Security Dishonest Majority MPC
By: Kaiwen Wang, Xiaolin Chang, Junchao Fan, and more
Potential Business Impact:
Lets AI models make predictions on private data without ever seeing it.
Existing MPC-based private inference frameworks either rely on impractical real-world assumptions or adopt the strongest security model (Malicious Security with Dishonest Majority, MSDM) and then suffer severe efficiency limitations. To balance security and efficiency, we propose a novel three-layer private inference framework based on the Helper-Assisted MSDM (HA-MSDM) model. The first is the primitive layer, where we extend computations from prime fields to rings to support efficient fixed-point arithmetic and thus better accommodate inference operations. The second is the MPC layer, where we design six fixed-round MPC protocols that reduce latency for core operations such as multiplication, polynomial evaluation, and batch checking. The third is the inference layer, which achieves efficient, high-accuracy CNN inference: efficiency comes from applying our MPC protocols, while high accuracy in deep CNNs comes from a co-optimized strategy that employs high-precision polynomial approximation of activation functions and parameter-adjusted Batch Normalization layers to constrain their inputs. Benchmarks on LeNet and AlexNet show our framework achieves a 2.4-25.7x speedup in LAN and a 1.3-9.5x speedup in WAN over state-of-the-art MSDM frameworks, with only 0.04-1.08% relative error.
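Two of the techniques the abstract names, fixed-point arithmetic over a ring (rather than a prime field) and polynomial approximation of activation functions, can be illustrated in the clear. The sketch below is not the paper's protocol: the ring size `K`, fraction bits `F`, and the cubic sigmoid-approximation coefficients are illustrative assumptions, and the truncation step that a real MPC framework performs with a dedicated protocol is done locally here.

```python
# Illustrative (not from the paper): fixed-point arithmetic over the ring
# Z_{2^K}, with F fractional bits, plus Horner evaluation of a polynomial
# approximation of an activation function on fixed-point values.
K = 64            # ring modulus is 2**K (assumed parameter)
F = 16            # fractional bits of the fixed-point encoding (assumed)
MOD = 1 << K

def encode(x: float) -> int:
    """Map a real number to a ring element (two's-complement style)."""
    return round(x * (1 << F)) % MOD

def decode(v: int) -> float:
    """Map a ring element back to a real number."""
    if v >= MOD // 2:          # upper half of the ring represents negatives
        v -= MOD
    return v / (1 << F)

def fx_mul(a: int, b: int) -> int:
    """Fixed-point multiply: ring multiply, then truncate F fractional bits.
    In an MPC framework this truncation is itself a protocol; here it is
    done in the clear on signed representatives."""
    sa = a - MOD if a >= MOD // 2 else a
    sb = b - MOD if b >= MOD // 2 else b
    return ((sa * sb) >> F) % MOD     # Python's >> floors, a standard truncation

def fx_poly(x: int, coeffs: list[float]) -> int:
    """Horner evaluation of a polynomial (coefficients highest-degree first)
    on a fixed-point ring element."""
    acc = 0
    for c in coeffs:
        acc = (fx_mul(acc, x) + encode(c)) % MOD
    return acc

# A well-known cubic approximation of sigmoid on a small interval,
# sigma(x) ~ 0.5 + 0.197*x - 0.004*x^3 (coefficients illustrative).
SIGMOID_CUBIC = [-0.004, 0.0, 0.197, 0.5]
```

For example, `decode(fx_poly(encode(1.0), SIGMOID_CUBIC))` is about 0.693, close to sigmoid(1) ≈ 0.731 on this crude cubic; the paper's point is that higher-precision approximations, combined with Batch Normalization keeping inputs inside the approximation interval, recover near-plaintext accuracy.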
Similar Papers
Robust and Verifiable MPC with Applications to Linear Machine Learning Inference
Cryptography and Security
Finds bad guys in secret computer math.
Privacy-Preserving Inference for Quantized BERT Models
Machine Learning (CS)
Keeps your private data safe during AI use.
Breaking the Layer Barrier: Remodeling Private Transformer Inference with Hybrid CKKS and MPC
Cryptography and Security
Keeps your computer secrets safe during calculations.