Privacy Mechanism Design based on Empirical Distributions
By: Leonhard Grosse, Sara Saeidian, Mikael Skoglund, and more
Potential Business Impact:
Protects private data even when the distribution that generated it is unknown.
Pointwise maximal leakage (PML) is a per-outcome privacy measure based on threat models from quantitative information flow. Privacy guarantees with PML rely on knowledge of the distribution that generated the private data. In this work, we propose a framework for PML privacy assessment and mechanism design with empirical estimates of this data-generating distribution. By extending the PML framework to consider sets of data-generating distributions, we arrive at bounds on the worst-case leakage within a given set. We use these bounds alongside large-deviation bounds from the literature to provide a method for obtaining distribution-independent $(\varepsilon,\delta)$-PML guarantees when the data-generating distribution is estimated from available data samples. We provide an optimal binary mechanism, and show that mechanism design under this type of uncertainty about the data-generating distribution reduces to a linearly constrained convex program. Further, we show that optimal mechanisms designed for a distribution estimate can be used. Finally, we apply these tools to leakage assessment of the Laplace mechanism and the Gaussian mechanism for binary private data, and numerically show that the presented approach to mechanism design can yield a significant utility increase compared to local differential privacy, while retaining similar privacy guarantees.
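As an illustration of the quantities discussed above, the sketch below computes the pointwise maximal leakage of a binary randomized-response channel, $\ell(X \to y) = \log \big( \max_{x:\,P_X(x)>0} P_{Y|X}(y|x) / P_Y(y) \big)$, and then bounds the worst-case PML over a small set of distributions around an empirical estimate. The channel, the sample data, the set radius, and the grid search are all illustrative assumptions, not the paper's actual mechanisms or bounds; the paper's distribution sets come from large-deviation bounds, which this toy grid search only mimics.

```python
import numpy as np

def pml(p_x, channel):
    """Pointwise maximal leakage of each outcome y.

    PML(X -> y) = log( max_x P(y|x) / P(y) ), with the max taken over
    x in the support of p_x.
    p_x:     data-generating distribution over X, shape (n,)
    channel: conditional distribution P(y|x), shape (n, m)
    """
    p_y = p_x @ channel                    # marginal distribution of Y
    support = p_x > 0                      # restrict max to the support of X
    return np.log(channel[support].max(axis=0) / p_y)

def worst_case_pml(p_hat, channel, radius, grid=1001):
    """Worst-case PML over binary distributions near the estimate p_hat.

    Illustrative stand-in for bounding leakage over a set of
    data-generating distributions: scan an interval of perturbations
    of the binary estimate and keep the largest leakage found.
    """
    worst = -np.inf
    for e in np.linspace(-radius / 2, radius / 2, grid):
        p = p_hat + np.array([e, -e])
        if (p >= 0).all() and (p <= 1).all():
            worst = max(worst, pml(p, channel).max())
    return worst

# Binary randomized-response channel with flip probability q (illustrative).
q = 0.2
channel = np.array([[1 - q, q],
                    [q, 1 - q]])

# Empirical estimate of the data distribution from hypothetical samples.
samples = np.array([0, 0, 1, 0, 1, 0, 0, 1])
p_hat = np.bincount(samples, minlength=2) / len(samples)

print("PML at estimate:   ", pml(p_hat, channel).max())
print("Worst-case in set: ", worst_case_pml(p_hat, channel, radius=0.1))
```

Note how the worst-case leakage over the set is never smaller than the leakage at the point estimate; this is the price paid for robustness to estimation error in the data-generating distribution.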
Similar Papers
Privacy Guarantee for Nash Equilibrium Computation of Aggregative Games Based on Pointwise Maximal Leakage
CS and Game Theory
Protects secrets better than old methods.
Evaluating Differential Privacy on Correlated Datasets Using Pointwise Maximal Leakage
Cryptography and Security
Shows how correlated data can make private data less safe.
Context-aware Privacy Bounds for Linear Queries
Information Theory
Makes private data sharing safer with less guessing.