Adaptive Privacy Budgeting
By: Yuting Liang, Ke Yi
Potential Business Impact:
Protects your data while still letting computers learn.
We study the problem of adaptive privacy budgeting under generalized differential privacy. Consider the setting where each user $i\in [n]$ holds a tuple $x_i\in U:=U_1\times \dotsb \times U_T$, where $x_i(l)\in U_l$ represents the $l$-th component of their data. For every $l\in [T]$ (or a subset thereof), an untrusted analyst wishes to compute some $f_l(x_1(l),\dots,x_n(l))$ while respecting the privacy of each user. For many functions $f_l$, the users' data are not all equally important, so there is potential to spend the users' privacy budgets strategically, yielding privacy savings that can be used to improve the utility of later queries. In particular, the budgeting should adapt to the outputs of previous queries, so that greater savings are achieved on more typical instances. In this paper, we provide such an adaptive budgeting framework, along with several applications that demonstrate its effectiveness.
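To make the idea concrete, below is a minimal sketch of adaptive budget splitting for a sequence of bounded mean queries under basic sequential composition. It is an illustration, not the paper's actual mechanism: the helper names (`laplace_mean`, `adaptive_budget_run`), the geometric split, and the "typical instance" test on the released noisy answer are all assumptions made for this example. The one property the sketch is careful to respect is that the budget schedule depends only on previously *released* outputs, which is what keeps adaptive composition accounting valid.

```python
import numpy as np

def laplace_mean(values, lo, hi, eps, rng):
    """Answer a bounded mean query with the Laplace mechanism.
    The mean over n users with values in [lo, hi] has sensitivity (hi - lo) / n.
    """
    n = len(values)
    sensitivity = (hi - lo) / n
    return np.mean(values) + rng.laplace(scale=sensitivity / eps)

def adaptive_budget_run(data, bounds, eps_total, rng):
    """Answer T mean queries under a total budget eps_total via basic
    sequential composition. The split is adaptive: each query gets a fraction
    of the remaining budget, and if the previous noisy answer sat close to its
    range midpoint (a 'typical' instance in this toy sense), the next query
    spends a smaller share, saving budget for later queries.
    """
    eps_left = eps_total
    answers = []
    frac = 0.5  # fraction of the remaining budget spent on the next query
    for l, (values, (lo, hi)) in enumerate(zip(data, bounds)):
        # Spend whatever remains on the final query.
        eps_l = frac * eps_left if l < len(data) - 1 else eps_left
        ans = laplace_mean(values, lo, hi, eps_l, rng)
        eps_left -= eps_l
        answers.append(ans)
        # Adapt the next split using only the released (noisy) output,
        # so the schedule itself leaks nothing beyond the answers.
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        frac = 0.3 if abs(ans - mid) < 0.25 * half else 0.6
    return answers
```

Spending all remaining budget on the final query is just one simple policy; the point of the sketch is only that each per-query $\varepsilon_l$ may depend on earlier noisy answers without invalidating the total-budget accounting.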
Similar Papers
A General Framework for Per-record Differential Privacy
Databases
Protects private data better when needs differ.
Differentially Private Federated Learning With Time-Adaptive Privacy Spending
Machine Learning (CS)
Learns more from private data, faster.
Setting $\varepsilon$ is not the Issue in Differential Privacy
Cryptography and Security
Makes privacy protection in computers easier to understand.