Peregrine: One-Shot Fine-Tuning for FHE Inference of General Deep CNNs
By: Huaming Ling, Ying Wang, Si Chen, and more
Potential Business Impact:
Lets computers see images privately and accurately.
We address two fundamental challenges in adapting general deep CNNs for FHE-based inference: approximating non-linear activations such as ReLU with low-degree polynomials while minimizing accuracy degradation, and overcoming the ciphertext capacity barrier that constrains high-resolution image processing in FHE inference. Our contributions are twofold: (1) a single-stage fine-tuning (SFT) strategy that directly converts pre-trained CNNs into FHE-friendly forms using low-degree polynomials, achieving competitive accuracy with minimal training overhead; and (2) a generalized interleaved packing (GIP) scheme that is compatible with feature maps of virtually arbitrary spatial resolutions, accompanied by a suite of carefully designed homomorphic operators that preserve the GIP-form encryption throughout computation. These advances enable efficient, end-to-end FHE inference across diverse CNN architectures. Experiments on CIFAR-10, ImageNet, and MS COCO demonstrate that the FHE-friendly CNNs obtained via our SFT strategy achieve accuracy comparable to baselines using ReLU or SiLU activations. Moreover, this work presents the first demonstration of FHE-based inference for YOLO architectures in object detection, leveraging low-degree polynomial activations.
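To make the core idea concrete, here is a minimal sketch of what converting a pre-trained CNN into an FHE-friendly form can look like: every ReLU is swapped for a trainable low-degree polynomial (only additions and multiplications, which FHE schemes support natively), and the whole network is then fine-tuned in a single stage. This is an illustrative assumption, not the paper's actual SFT implementation; the PolyAct class, the replace_relu helper, the degree-2 form, and the coefficient initializations are all hypothetical choices made for this example.

```python
# Illustrative sketch only: not Peregrine's SFT method. Assumes PyTorch and
# torchvision are installed; PolyAct, replace_relu, and all constants below
# are hypothetical.
import torch
import torch.nn as nn
import torchvision.models as models

class PolyAct(nn.Module):
    """Trainable degree-2 polynomial a*x^2 + b*x + c.

    FHE-friendly because it uses only additions and multiplications,
    unlike ReLU's non-polynomial max(0, x)."""
    def __init__(self):
        super().__init__()
        # Initialized near a smooth, ReLU-like shape; values are illustrative.
        self.a = nn.Parameter(torch.tensor(0.1))
        self.b = nn.Parameter(torch.tensor(0.5))
        self.c = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

def replace_relu(module: nn.Module) -> None:
    """Recursively swap every nn.ReLU in a pre-trained model for PolyAct."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, PolyAct())
        else:
            replace_relu(child)

# Start from a pre-trained network and convert its activations in place.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
replace_relu(model)

# Single-stage fine-tuning skeleton: convolution weights and polynomial
# coefficients are optimized together, directly on the target task.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# for images, labels in train_loader:  # train_loader assumed to exist
#     optimizer.zero_grad()
#     criterion(model(images), labels).backward()
#     optimizer.step()
```

The single-stage aspect is the point of contrast: rather than first distilling or progressively annealing activations and then retraining, one fine-tuning pass adapts the network to the polynomial activations directly.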
Similar Papers
FastFHE: Packing-Scalable and Depthwise-Separable CNN Inference Over FHE
Cryptography and Security
Speeds up AI that works on secret data.
Privacy-Preserving CNN Training with Transfer Learning: Two Hidden Layers
Cryptography and Security
Trains computers on secret data without seeing it.
InstantFT: An FPGA-Based Runtime Subsecond Fine-tuning of CNN Models
Machine Learning (CS)
Makes smart devices learn new things super fast.