From Split to Share: Private Inference with Distributed Feature Sharing
By: Zihan Liu, Jiayi Wen, Shouhong Tan, and more
Potential Business Impact:
Keeps your private data safe during AI analysis.
Cloud-based Machine Learning as a Service (MLaaS) raises serious privacy concerns when handling sensitive client data. Existing Private Inference (PI) methods face a fundamental trade-off between privacy and efficiency: cryptographic approaches offer strong protection but incur high computational overhead, while efficient alternatives such as split inference expose intermediate features to inversion attacks. We propose PrivDFS, a new paradigm for private inference that replaces a single exposed representation with distributed feature sharing. PrivDFS partitions input features on the client into multiple balanced shares, which are distributed to non-colluding, non-communicating servers for independent partial inference. The client securely aggregates the servers' outputs to reconstruct the final prediction, ensuring that no single server observes sufficient information to compromise input privacy. To further strengthen privacy, we propose two key extensions: PrivDFS-AT, which uses adversarial training with a diffusion-based proxy attacker to enforce inversion-resistant feature partitioning, and PrivDFS-KD, which leverages user-specific keys to diversify partitioning policies and prevent query-based inversion generalization. Experiments on CIFAR-10 and CelebA demonstrate that PrivDFS achieves privacy comparable to deep split inference while cutting client computation by up to 100 times with no accuracy loss, and that the extensions remain robust against both diffusion-based in-distribution and adaptive attacks.
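The core idea of distributing balanced feature shares to non-colluding servers can be illustrated with a much-simplified sketch. The actual PrivDFS partitioning is learned end-to-end; this toy version instead uses additive random shares and assumes a linear per-server head `W` (a hypothetical stand-in, not the paper's architecture) so that summing the servers' partial outputs exactly recovers the direct prediction, while each individual share is random noise to its server.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(x, k):
    """Split feature vector x into k additive shares that sum to x.
    Any single share is masked by random noise, so no one server
    sees the original features (a secret-sharing-style analogue)."""
    shares = [rng.standard_normal(x.shape) for _ in range(k - 1)]
    shares.append(x - sum(shares))
    return shares

def server_partial_inference(W, share):
    # Each server runs the same (assumed linear) head on its share only.
    return W @ share

def client_aggregate(partials):
    # Linearity makes aggregation exact: sum_i W @ s_i = W @ x.
    return sum(partials)

x = rng.standard_normal(8)        # client's private feature vector
W = rng.standard_normal((3, 8))   # hypothetical model head on each server
shares = make_shares(x, k=3)
partials = [server_partial_inference(W, s) for s in shares]
pred = client_aggregate(partials)
assert np.allclose(pred, W @ x)   # matches inference on the raw features
```

In the paper, the per-server computation is a nonlinear partial network and the partitioning policy itself is trained (and, in PrivDFS-AT/KD, hardened against inversion), so exact additive cancellation does not apply; the sketch only conveys why a single server's view is insufficient to reconstruct the input.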
Similar Papers
What Your Features Reveal: Data-Efficient Black-Box Feature Inversion Attack for Split DNNs
CV and Pattern Recognition
Reveals how hackers can steal private data from smart devices.
Revisiting the Privacy Risks of Split Inference: A GAN-Based Data Reconstruction Attack via Progressive Feature Optimization
CV and Pattern Recognition
Reconstructs private data from split inference tasks.
PRIVEE: Privacy-Preserving Vertical Federated Learning Against Feature Inference Attacks
Machine Learning (CS)
Keeps private data safe during shared learning.