Score: 1

How Well Can Differential Privacy Be Audited in One Run?

Published: March 10, 2025 | arXiv ID: 2503.07199v2

By: Amit Keinan, Moshe Shenfeld, Katrina Ligett

Potential Business Impact:

Makes privacy audits of machine learning algorithms faster and more accurate.

Business Areas:
A/B Testing, Data and Analytics

Recent methods for auditing the privacy of machine learning algorithms have improved computational efficiency by simultaneously intervening on multiple training examples in a single training run. Steinke et al. (2024) prove that one-run auditing indeed lower bounds the true privacy parameter of the audited algorithm, and give impressive empirical results. Their work leaves open the question of how precisely one-run auditing can uncover the true privacy parameter of an algorithm, and how that precision depends on the audited algorithm. In this work, we characterize the maximum achievable efficacy of one-run auditing and show that the key barrier to its efficacy is interference between the observable effects of different data elements. We present new conceptual approaches to minimize this barrier, towards improving the performance of one-run auditing of real machine learning algorithms.
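The core idea of one-run auditing can be illustrated with a small simulation. The sketch below is a toy example, not the authors' code: the canary setup, the stand-in "mechanism," and the treatment of per-canary guesses as independent Bernoulli trials are all simplifying assumptions (the paper's analysis handles the dependence between guesses rigorously). It randomly includes half of a set of canaries, runs the mechanism once, guesses inclusion from the resulting scores, and converts a confidence bound on the guessing accuracy into a lower bound on epsilon.

```python
# Toy sketch of one-run DP auditing, in the spirit of Steinke et al. (2024).
# Everything here (names, the fake "mechanism", the binomial treatment of
# guesses) is an illustrative assumption, not the paper's actual procedure.

import math
import random


def clopper_pearson_lower(correct, guesses, alpha=0.05):
    """(1 - alpha) lower confidence bound on per-guess accuracy,
    found by bisection on the binomial upper tail."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        p = (lo + hi) / 2
        # P[Binomial(guesses, p) >= correct]
        tail = sum(math.comb(guesses, i) * p**i * (1 - p) ** (guesses - i)
                   for i in range(correct, guesses + 1))
        if tail < alpha:
            lo = p  # p too small to plausibly explain this many correct guesses
        else:
            hi = p
    return lo


def audit_one_run(num_canaries=1000, signal=1.0, noise=2.0, alpha=0.05):
    # 1. Flip a fair coin per canary: include (1) or exclude (0).
    included = [random.randint(0, 1) for _ in range(num_canaries)]

    # 2. Run the mechanism ONCE. Here a toy stand-in: each canary's score
    #    leaks its inclusion bit plus Gaussian noise.
    scores = [signal * s + random.gauss(0.0, noise) for s in included]

    # 3. Guess inclusion by thresholding the scores.
    guesses = [1 if sc > signal / 2 else 0 for sc in scores]
    correct = sum(int(g == s) for g, s in zip(guesses, included))

    # 4. For a pure eps-DP mechanism, each guess is correct with probability
    #    at most e^eps / (e^eps + 1), so a lower confidence bound p on the
    #    accuracy gives eps >= log(p / (1 - p)).
    p_lb = clopper_pearson_lower(correct, num_canaries, alpha)
    eps_lb = math.log(p_lb / (1 - p_lb)) if 0.5 < p_lb < 1 else 0.0
    return correct, eps_lb


if __name__ == "__main__":
    correct, eps_lb = audit_one_run()
    print(f"correct guesses: {correct}/1000, empirical eps lower bound: {eps_lb:.3f}")
```

In this toy setting the "interference" the paper studies would show up as canaries whose scores are affected by which other canaries were included, which weakens the per-guess signal and hence the recoverable lower bound.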

Country of Origin
🇮🇱 Israel

Repos / Data Links

Page Count
36 pages

Category
Computer Science: Machine Learning (CS)