Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis
By: Ruijun Deng, Zhihui Lu, Qiang Duan
Potential Business Impact:
Protects secrets when computers learn together.
Split inference (SI) partitions deep neural networks into distributed sub-models, enabling privacy-preserving collaborative learning. Nevertheless, it remains vulnerable to Data Reconstruction Attacks (DRAs), wherein adversaries exploit the exposed smashed data to reconstruct raw inputs. Despite extensive research on adversarial attack-defense games, a fundamental analysis of the underlying privacy risk is still lacking. This paper establishes a theoretical framework for quantifying privacy leakage using information theory, defining leakage as the adversary's certainty about the raw input and deriving both average-case and worst-case bounds on the adversary's reconstruction error. We introduce Fisher-approximated Shannon information (FSInfo), a novel privacy metric that uses Fisher Information (FI) to make privacy leakage operationally computable. We empirically show that our metric correlates well with the reconstruction quality achieved by empirical attacks, and we analyze how data distribution, model size, and overfitting affect privacy leakage.
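To make the Fisher Information idea concrete, below is a minimal sketch of a FI-based leakage proxy for a client-side sub-model. It assumes a Gaussian observation model for the smashed data (z = f(x) + noise), builds the per-sample Fisher matrix from the Jacobian of the smashed representation with respect to the raw input, and reduces it to a scalar via a log-determinant. The function names, the noise assumption, and the log-det reduction are illustrative choices for this sketch and are not the paper's exact FSInfo formula.

```python
# Sketch: Fisher-Information-based privacy leakage proxy for split inference.
# Assumes z = client_model(x) + eps with eps ~ N(0, sigma^2 I), so the Fisher
# Information about x is (1/sigma^2) * J^T J, where J = dz/dx.
import torch
from torch.func import jacrev


def fisher_leakage_proxy(client_model: torch.nn.Module,
                         x: torch.Tensor,
                         sigma: float = 1.0) -> torch.Tensor:
    """Scalar leakage proxy (log-det of the Fisher Information) for one input x."""
    client_model.eval()
    flat_x = x.flatten()

    def smashed(flat_inp: torch.Tensor) -> torch.Tensor:
        # Reshape the flat input back to the model's expected shape, add a batch dim,
        # and return the flattened smashed representation.
        z = client_model(flat_inp.view_as(x).unsqueeze(0))
        return z.flatten()

    # Jacobian of the smashed data w.r.t. the raw input: shape (dim_z, dim_x).
    jac = jacrev(smashed)(flat_x)
    fisher = jac.T @ jac / (sigma ** 2)  # (dim_x, dim_x) Fisher matrix

    # Small jitter keeps the log-det finite when the Jacobian is rank-deficient.
    eye = torch.eye(fisher.shape[0], dtype=fisher.dtype, device=fisher.device)
    return torch.logdet(fisher + 1e-6 * eye)
```

In use, one would evaluate this proxy on the same inputs for different candidate split points and compare the resulting scores; for image-sized inputs the dense Fisher matrix grows quickly, so a trace or diagonal approximation is the usual workaround in practice.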
Similar Papers
InfoDecom: Decomposing Information for Defending against Privacy Leakage in Split Inference
Cryptography and Security
Keeps your private data safe when using AI.
Revisiting the Privacy Risks of Split Inference: A GAN-Based Data Reconstruction Attack via Progressive Feature Optimization
CV and Pattern Recognition
Steals private data from split computer tasks.
Technical note on Fisher Information for Robust Federated Cross-Validation
Machine Learning (CS)
Fixes AI learning when data is spread out.