Efficient Public Verification of Private ML via Regularization
By: Zoë Ruha Bell, Anvith Thudi, Olive Franzese-McLaughlin, and more
Potential Business Impact:
Lets data providers cheaply check that models trained on their data keep it private.
Training with differential privacy (DP) provides a guarantee to members of a dataset that they cannot be identified by users of the released model. However, those data providers, and the public in general, lack methods to efficiently verify that models trained on their data satisfy DP guarantees: for current algorithms, the compute needed to verify a DP guarantee scales with the compute required to train the model. In this paper we design the first DP algorithm with near-optimal privacy-utility trade-offs whose DP guarantees can be verified more cheaply than training. We focus on DP stochastic convex optimization (DP-SCO), where optimal privacy-utility trade-offs are known. We show that tight privacy-utility trade-offs can be obtained by privately minimizing a series of regularized objectives, using only the standard DP composition bound. Crucially, this method can be verified with much less compute than training. This yields the first known DP-SCO algorithm with near-optimal privacy-utility trade-offs whose DP verification scales better than the training cost, significantly reducing verification costs on large datasets.
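To make the abstract's mechanism concrete, below is a minimal illustrative sketch of privately minimizing a sequence of regularized objectives and accounting for privacy with the standard composition bound. It is an assumption-laden reconstruction, not the paper's algorithm: the function name `private_regularized_phases`, the phase schedule, the use of output perturbation per phase, and the noise calibration are all introduced here for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): solve a series of
# regularized objectives, releasing each phase's minimizer with Gaussian
# noise, and account for privacy via basic DP composition over phases.

def private_regularized_phases(loss_grad, data, dim, num_phases=4,
                               lip=1.0, eps_total=1.0, delta=1e-5,
                               radius=1.0, seed=0):
    """Return a private model from a sequence of regularized minimizations.

    loss_grad(w, x): gradient of the per-example loss at w on example x,
    assumed convex and `lip`-Lipschitz in w (illustrative assumption).
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    eps_phase = eps_total / num_phases          # basic composition over phases
    w = np.zeros(dim)                           # public starting point
    lam = 1.0 / radius                          # initial regularization strength
    for _ in range(num_phases):
        # Phase objective: (1/n) * sum_x loss(w, x) + (lam/2) * ||w - w_prev||^2,
        # minimized here with plain (non-private) gradient descent for brevity.
        w_prev = w.copy()
        step = 1.0 / (lam + lip)
        for _ in range(200):
            g = np.mean([loss_grad(w, x) for x in data], axis=0)
            g += lam * (w - w_prev)
            w -= step * g
        # Strong convexity of the regularized objective bounds the minimizer's
        # L2 sensitivity by 2*lip/(n*lam); calibrate Gaussian noise to that.
        sensitivity = 2.0 * lip / (n * lam)
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps_phase
        w = w + rng.normal(scale=sigma, size=dim)
        lam *= 2.0                              # tighten regularization each phase
    return w
```

In a construction of this shape, the DP guarantee rests on a small number of calibrated noise additions combined by the standard composition bound, which is plausibly why, as the abstract states, the guarantee can be checked with far less compute than the training itself.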
Similar Papers
An Interactive Framework for Finding the Optimal Trade-off in Differential Privacy
Machine Learning (CS)
Finds best privacy for data without losing accuracy.
Graph Structure Learning with Privacy Guarantees for Open Graph Data
Machine Learning (CS)
Keeps private info safe when sharing data.
Computational Attestations of Polynomial Integrity Towards Verifiable Machine-Learning
Cryptography and Security
Proves private computer learning is fast and safe.