Setting $\varepsilon$ is not the Issue in Differential Privacy
By: Edwige Cyffers
Potential Business Impact:
Makes privacy protection in computers easier to understand.
This position paper argues that setting the privacy budget should not be viewed as an important limitation of differential privacy compared to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework or justify why it cannot be.
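As background (not restated in the abstract), the privacy budget $\varepsilon$ enters through the standard definition of $(\varepsilon, \delta)$-differential privacy: a randomized mechanism $M$ satisfies it if, for all neighboring datasets $D, D'$ and all measurable output sets $S$,

$$\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S] + \delta,$$

so a smaller $\varepsilon$ means the output distributions on adjacent datasets are harder to distinguish. The abstract's point is that translating this worst-case bound into a contextual privacy risk is what is genuinely hard, not the parameter $\varepsilon$ itself.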
Similar Papers
Adaptive Privacy Budgeting
Cryptography and Security
Protects your data while still letting computers learn.
Exploring the Integration of Differential Privacy in Cybersecurity Analytics: Balancing Data Utility and Privacy in Threat Intelligence
Cryptography and Security
Keeps secret computer attack clues safe.
Comparing privacy notions for protection against reconstruction attacks in machine learning
Machine Learning (CS)
Compares privacy methods for safer AI learning.