InfoDecom: Decomposing Information for Defending against Privacy Leakage in Split Inference
By: Ruijun Deng, Zhihui Lu, Qiang Duan
Potential Business Impact:
Keeps your private data safe when using AI.
Split inference (SI) enables users to access deep learning (DL) services without directly transmitting raw data. However, recent studies reveal that data reconstruction attacks (DRAs) can recover the original inputs from the smashed data sent from the client to the server, leading to significant privacy leakage. While various defenses have been proposed, they often result in substantial utility degradation, particularly when the client-side model is shallow. We identify a key cause of this trade-off: existing defenses apply excessive perturbation to redundant information in the smashed data. To address this issue in computer vision tasks, we propose InfoDecom, a defense framework that first decomposes the smashed data to isolate and remove redundant information, and then injects noise calibrated to provide theoretically guaranteed privacy. Experiments demonstrate that InfoDecom achieves a superior utility-privacy trade-off compared to existing baselines. The code and the appendix are available at https://github.com/SASA-cloud/InfoDecom.
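The abstract outlines a two-stage client-side pipeline: strip redundant information out of the smashed data, then add noise calibrated for a privacy guarantee. Below is a minimal NumPy sketch of that pipeline shape, not the paper's actual method: the truncated-SVD redundancy removal, the toy one-layer client model, the Gaussian noise, and the keep/sigma parameters are all illustrative assumptions.

# Minimal sketch of a decompose-then-perturb client pipeline in the
# spirit of InfoDecom. Assumptions (not from the paper): redundancy is
# approximated by low-energy SVD components, the client model is a toy
# one-layer ReLU network, and the noise is Gaussian with a fixed scale.
import numpy as np

def remove_redundant(smashed, keep):
    # Keep only the top-`keep` singular directions of the batch of
    # smashed data; the discarded components stand in for the
    # "redundant information" the paper removes.
    flat = smashed.reshape(smashed.shape[0], -1)      # (batch, features)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    s[keep:] = 0.0                                    # drop low-energy components
    return ((u * s) @ vt).reshape(smashed.shape)

def add_calibrated_noise(x, sigma, rng):
    # Stand-in for the paper's calibrated noise injection.
    return x + rng.normal(0.0, sigma, size=x.shape)

def client_forward(x, weight, keep=8, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    smashed = np.maximum(x @ weight, 0.0)             # toy client-side model
    smashed = remove_redundant(smashed, keep)
    return add_calibrated_noise(smashed, sigma, rng)  # sent to the server

# Usage: a batch of 16 inputs with 64 features through a 64->32 client layer.
rng = np.random.default_rng(42)
x = rng.normal(size=(16, 64))
w = rng.normal(size=(64, 32)) / 8.0
protected = client_forward(x, w)
print(protected.shape)                                # (16, 32)

The design intuition the abstract suggests carries over to this sketch: because redundant components are removed before noising, a given noise budget perturbs the retained, task-relevant features rather than being spent covering information the server never needed.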
Similar Papers
Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis
Cryptography and Security
Protects secrets when computers learn together.
Revisiting the Privacy Risks of Split Inference: A GAN-Based Data Reconstruction Attack via Progressive Feature Optimization
CV and Pattern Recognition
Steals private data from split computing tasks.
How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference
Cryptography and Security
Protects private data sent from phones to computers.