Enhancing One-run Privacy Auditing with Quantile Regression-Based Membership Inference
By: Terrance Liu, Matteo Boglioni, Yiwei Fu, and more
Potential Business Impact:
Checks a model's privacy guarantees without needing many training runs.
Differential privacy (DP) auditing aims to provide empirical lower bounds on the privacy guarantees of DP mechanisms like DP-SGD. While some existing techniques require many training runs that are prohibitively costly, recent work introduces one-run auditing approaches that effectively audit DP-SGD in white-box settings while still being computationally efficient. However, in the more practical black-box setting where gradients cannot be manipulated during training and only the last model iterate is observed, prior work shows that there is still a large gap between the empirical lower bounds and theoretical upper bounds. Consequently, in this work, we study how incorporating approaches for stronger membership inference attacks (MIA) can improve one-run auditing in the black-box setting. Evaluating on image classification models trained on CIFAR-10 with DP-SGD, we demonstrate that our proposed approach, which utilizes quantile regression for MIA, achieves tighter bounds while crucially maintaining the computational efficiency of one-run methods.
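The abstract does not give implementation details, but the core idea of quantile regression-based membership inference can be sketched as follows: fit a regressor that predicts, from per-example features, a high conditional quantile of the score (e.g., loss or confidence) that non-members attain, then flag examples whose observed score exceeds their predicted quantile as likely members. The snippet below is a minimal illustration on synthetic data, not the authors' method; all feature and score definitions here are hypothetical, and `GradientBoostingRegressor` with `loss="quantile"` stands in for whatever quantile model the paper uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical setup: each example has features X (e.g., input statistics)
# and a score s (e.g., the target model's loss on that example).
# Non-member scores vary with the features; members tend to score higher.
n = 2000
w = np.array([0.5, -0.3, 0.2, 0.1])
X = rng.normal(size=(n, 4))
scores_nonmember = X @ w + rng.normal(scale=0.5, size=n)

# Quantile regression: learn a per-example threshold, the alpha-quantile
# of the non-member score conditional on the example's features.
alpha = 0.95
qr = GradientBoostingRegressor(loss="quantile", alpha=alpha, n_estimators=200)
qr.fit(X, scores_nonmember)

def is_member(x_feats, score):
    """Membership guess: score above its predicted quantile => 'member'."""
    return score > qr.predict(x_feats.reshape(1, -1))[0]

# Sanity check: on fresh non-members, the false-positive rate should sit
# near 1 - alpha, since each example gets its own calibrated threshold.
X_test = rng.normal(size=(500, 4))
s_test = X_test @ w + rng.normal(scale=0.5, size=500)
fpr = float(np.mean(s_test > qr.predict(X_test)))
```

Per-example thresholds are what make this attack stronger than a single global cutoff: examples that are intrinsically "hard" (high loss even for non-members) are not mistaken for members, which in turn yields tighter empirical privacy lower bounds in the one-run audit.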
Similar Papers
How Well Can Differential Privacy Be Audited in One Run?
Machine Learning (CS)
Makes computer privacy checks faster and more accurate.
Empirical Calibration and Metric Differential Privacy in Language Models
Machine Learning (CS)
Protects private text data better in AI.
Sequentially Auditing Differential Privacy
Cryptography and Security
Checks if private data stays secret faster.