Mitigating Data Exfiltration Attacks through Layer-Wise Learning Rate Decay Fine-Tuning
By: Elie Thellier, Huiyu Li, Nicholas Ayache, and more
Potential Business Impact:
Protects patient medical images from being stolen out of trained AI models.
Data lakes enable the training of powerful machine learning models on sensitive, high-value medical datasets, but also introduce serious privacy risks due to potential leakage of protected health information. Recent studies show adversaries can exfiltrate training data by embedding latent representations into model parameters or inducing memorization via multi-task learning. These attacks disguise themselves as benign utility models while enabling reconstruction of high-fidelity medical images, posing severe privacy threats with legal and ethical implications. In this work, we propose a simple yet effective mitigation strategy that perturbs model parameters at export time through fine-tuning with a decaying layer-wise learning rate to corrupt embedded data without degrading task performance. Evaluations on DermaMNIST, ChestMNIST, and MIMIC-CXR show that our approach maintains utility task performance, effectively disrupts state-of-the-art exfiltration attacks, outperforms prior defenses, and renders exfiltrated data unusable for training. Ablations and discussions on adaptive attacks highlight challenges and future directions. Our findings offer a practical defense against data leakage in data lake-trained models and centralized federated learning.
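The defense described above can be pictured as a brief fine-tuning pass applied just before the model is exported, with a per-layer learning rate that decays geometrically across the network. The following is a minimal PyTorch sketch under stated assumptions: the toy classifier, the `base_lr` and `decay` values, the decay direction (output layer toward input), and the placeholder `clean_loader` are all illustrative and not the authors' exact configuration.

```python
# Sketch of export-time fine-tuning with a decaying layer-wise learning rate.
# Hyperparameters, model, and decay direction are illustrative assumptions.
import torch
import torch.nn as nn

# Toy classifier standing in for the utility model trained inside the data lake.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

base_lr = 1e-3   # learning rate assigned to the output layer (assumed value)
decay = 0.5      # multiplicative per-layer decay toward the input (assumed value)

# Group parameters layer by layer and assign geometrically decaying learning
# rates, so each layer is perturbed with a different magnitude during the
# export-time fine-tuning pass.
layers = [m for m in model if any(p.requires_grad for p in m.parameters())]
param_groups = [
    {"params": list(layer.parameters()), "lr": base_lr * (decay ** depth)}
    for depth, layer in enumerate(reversed(layers))  # depth 0 = output layer
]

optimizer = torch.optim.SGD(param_groups, lr=base_lr)
criterion = nn.CrossEntropyLoss()

def export_time_finetune(model, clean_loader, steps=100):
    """Short fine-tuning pass on clean task data right before export.

    `clean_loader` is a placeholder for a small, trusted utility dataset;
    the perturbation is meant to corrupt any data embedded in the weights
    while preserving task performance.
    """
    model.train()
    it = iter(clean_loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(clean_loader)
            x, y = next(it)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    return model
```

In this sketch the perturbation strength differs per layer because each parameter group carries its own learning rate; the actual schedule, number of steps, and data used in the paper may differ.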
Similar Papers
Data Exfiltration by Compression Attack: Definition and Evaluation on Medical Image Data
Cryptography and Security
Shows how patient scans can be stolen from computer systems.
Tuning for Two Adversaries: Enhancing the Robustness Against Transfer and Query-Based Attacks using Hyperparameter Tuning
Machine Learning (CS)
Makes AI models harder to attack by tuning their training settings.
Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models
Computation and Language
Keeps private information from leaking when language models are fine-tuned.