Unifying Laplace Mechanism with Instance Optimality in Differential Privacy
By: David Durfee
Potential Business Impact:
Protects private data while still being useful.
We adapt the canonical Laplace mechanism, widely used in differentially private data analysis, to achieve near instance optimality with respect to the hardness of the underlying dataset. In particular, we construct a piecewise Laplace distribution whereby we defy traditional assumptions and show that Laplace noise can in fact be drawn proportional to the local sensitivity when done in a piecewise manner. While it may initially seem counterintuitive that this satisfies (pure) differential privacy and can be sampled efficiently, we establish both properties through a simple connection to the exponential mechanism and inverse sensitivity, along with the fact that the Laplace distribution is a two-sided exponential distribution. As a result, we prove that in the continuous setting our piecewise Laplace mechanism strictly dominates the inverse sensitivity mechanism, which was previously shown both to be nearly instance optimal and to uniformly outperform the smooth sensitivity framework. Furthermore, in the worst case, where all local sensitivities equal the global sensitivity, our method simply reduces to the standard Laplace mechanism. We also complement this with an approximate local-sensitivity variant to potentially ease the computational cost, which can also extend to higher dimensions.
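For background, the canonical Laplace mechanism the abstract builds on can be sketched in a few lines: noise is drawn from a Laplace (two-sided exponential) distribution with scale equal to the global sensitivity divided by epsilon. This is only the standard baseline; the paper's piecewise, local-sensitivity construction is not reproduced here, and the function and parameter names below are illustrative, not the authors' code.

```python
import math
import random

def laplace_mechanism(true_value, global_sensitivity, epsilon, rng=random):
    """Canonical Laplace mechanism: release true_value + Laplace(b) noise with
    b = global_sensitivity / epsilon, which satisfies epsilon-differential
    privacy. (The paper's piecewise variant instead scales Laplace noise to
    the local sensitivity; that construction is not sketched here.)
    """
    b = global_sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution, using that it is a
    # two-sided exponential: u uniform in [-0.5, 0.5) maps to
    # -b * sign(u) * ln(1 - 2|u|).
    u = rng.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release a count query (global sensitivity 1) at epsilon = 1.
noisy_count = laplace_mechanism(42, global_sensitivity=1.0, epsilon=1.0)
```

In the worst case described in the abstract, where every local sensitivity equals the global sensitivity, the piecewise mechanism coincides with exactly this baseline.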
Similar Papers
Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning
Machine Learning (CS)
Makes private data analysis more accurate and faster.
A Failure-Free and Efficient Discrete Laplace Distribution for Differential Privacy in MPC
Cryptography and Security
Keeps private data secret even after calculations.
Privately Estimating Black-Box Statistics
Cryptography and Security
Protects private data when using unknown computer programs.