How Well Can Differential Privacy Be Audited in One Run?
By: Amit Keinan, Moshe Shenfeld, Katrina Ligett
Potential Business Impact:
Makes privacy checks for machine learning systems much cheaper by showing how accurately they can be done with a single training run.
Recent methods for auditing the privacy of machine learning algorithms have improved computational efficiency by simultaneously intervening on multiple training examples in a single training run. Steinke et al. (2024) prove that one-run auditing indeed lower bounds the true privacy parameter of the audited algorithm, and give impressive empirical results. Their work leaves open the question of how precisely one-run auditing can uncover the true privacy parameter of an algorithm, and how that precision depends on the audited algorithm. In this work, we characterize the maximum achievable efficacy of one-run auditing and show that the key barrier is interference between the observable effects of different data elements. We present new conceptual approaches to minimizing this barrier, toward improving the performance of one-run auditing of real machine learning algorithms.
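To make the one-run auditing setup concrete, here is a minimal sketch of the recipe popularized by Steinke et al. (2024): insert many "canary" examples, each included in the training set independently with probability 1/2, train once, guess each canary's inclusion from some per-canary score, and convert the number of correct guesses into a lower bound on the privacy parameter epsilon. The function name audit_epsilon_lower_bound and the binomial-tail conversion below are illustrative simplifications of our own, not the paper's theorem; in particular, this sketch treats the guesses as independent, an assumption the actual analysis does not need.

```python
import math
from scipy.stats import binom

def audit_epsilon_lower_bound(num_correct, num_guesses, alpha=0.05):
    """Largest epsilon ruled out at confidence 1 - alpha.

    Under pure epsilon-DP, each inclusion guess is correct with
    probability at most p(eps) = e^eps / (1 + e^eps). If observing
    `num_correct` or more correct guesses out of `num_guesses` would
    be too unlikely under that ceiling, the mechanism cannot be
    eps-DP, so eps is a valid lower bound on the true parameter.
    """
    lo, hi = 0.0, 10.0  # assumed search cap of eps = 10
    for _ in range(60):  # binary search over candidate epsilons
        eps = (lo + hi) / 2.0
        p = math.exp(eps) / (1.0 + math.exp(eps))
        # P[Binomial(num_guesses, p) >= num_correct]
        tail = binom.sf(num_correct - 1, num_guesses, p)
        if tail < alpha:
            lo = eps  # eps-DP rejected at this level; try a larger eps
        else:
            hi = eps  # observation consistent with eps-DP; shrink the bound
    return lo

# Example: the auditor guesses inclusion for 1000 canaries and gets 950 right.
print(audit_epsilon_lower_bound(950, 1000))
```

The design choice is the standard hypothesis-testing reduction: each candidate epsilon implies a ceiling on guess accuracy, and a binomial tail test decides whether the observed accuracy rules that epsilon out. Steinke et al.'s bound is tighter and holds without the independence assumption made here, but the sketch shows why packing many canaries into one run can be informative, and why interference between canaries (the barrier studied in this paper) degrades the per-canary scores the audit relies on.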
Similar Papers
Privacy Audit as Bits Transmission: (Im)possibilities for Audit by One Run
Cryptography and Security
Checks if computer programs keep secrets safe.
Tight Privacy Audit in One Run
Cryptography and Security
Checks if private data stays private.
Monitoring Violations of Differential Privacy over Time
Cryptography and Security
Keeps private information safe as apps update.