Revisiting the Privacy Risks of Split Inference: A GAN-Based Data Reconstruction Attack via Progressive Feature Optimization
By: Yixiang Qiu, Yanhan Liu, Hongyao Yu, et al.
Potential Business Impact:
Shows how attackers can steal private data from split computing tasks.
The growing complexity of Deep Neural Networks (DNNs) has led to the adoption of Split Inference (SI), a collaborative paradigm that partitions computation between edge devices and the cloud to reduce latency and protect user privacy. However, recent advances in Data Reconstruction Attacks (DRAs) reveal that intermediate features exchanged in SI can be exploited to recover sensitive input data, posing significant privacy risks. Existing DRAs are typically effective only on shallow models and fail to fully leverage semantic priors, limiting their reconstruction quality and generalizability across datasets and model architectures. In this paper, we propose a novel GAN-based DRA framework with Progressive Feature Optimization (PFO), which decomposes the generator into hierarchical blocks and incrementally refines intermediate representations to enhance the semantic fidelity of reconstructed images. To stabilize the optimization and improve image realism, we introduce an L1-ball constraint during reconstruction. Extensive experiments show that our method outperforms prior attacks by a large margin, especially in high-resolution scenarios, out-of-distribution settings, and against deeper and more complex DNNs.
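The core threat model above can be illustrated with a toy example: an edge device sends intermediate features to the cloud, and an attacker who observes those features reconstructs the input by gradient descent on a feature-matching loss, projecting onto an L1-ball after each step to constrain the solution. This is a minimal numpy sketch under simplifying assumptions (a single linear edge layer known to the attacker, an oracle L1 radius, and standard Euclidean L1-ball projection); the paper's actual GAN-based, block-wise Progressive Feature Optimization is far more involved.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the L1-ball of the given radius
    (standard sort-and-threshold algorithm)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    # Largest k with u[k] * (k+1) > css[k] - radius determines the threshold.
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)

# Toy split inference: the edge device computes f = W @ x and ships f
# to the cloud. W and the radius below are illustrative assumptions.
W = rng.standard_normal((32, 16))         # edge-side layer (white-box here)
x_private = rng.standard_normal(16)       # the user's private input
features = W @ x_private                  # what crosses the network

# Feature-inversion attack: minimize ||W x - f||^2 by projected
# gradient descent, keeping x inside an L1-ball for stability.
radius = 1.1 * np.abs(x_private).sum()    # oracle radius, for illustration
x_hat = np.zeros(16)
lr = 0.005
for _ in range(3000):
    grad = 2.0 * W.T @ (W @ x_hat - features)
    x_hat = project_l1_ball(x_hat - lr * grad, radius)

recon_error = np.linalg.norm(x_hat - x_private)  # should be very small
```

Because the projection is non-expansive and the true input lies inside the ball, the constrained iteration converges like plain gradient descent while never leaving the feasible region; this is the stabilizing role the abstract attributes to the L1-ball constraint, shown here in its simplest form.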
Similar Papers
What Your Features Reveal: Data-Efficient Black-Box Feature Inversion Attack for Split DNNs
CV and Pattern Recognition
Reveals how hackers can steal private data from smart devices.
InfoDecom: Decomposing Information for Defending against Privacy Leakage in Split Inference
Cryptography and Security
Keeps your private data safe when using AI.
From Split to Share: Private Inference with Distributed Feature Sharing
Machine Learning (CS)
Keeps your private data safe during AI analysis.