How Worrying Are Privacy Attacks Against Machine Learning?
By: Josep Domingo-Ferrer
Potential Business Impact:
Protects personal information used to train AI.
In several jurisdictions, the regulatory framework on the release and sharing of personal data is being extended to machine learning (ML). The implicit assumption is that disclosing a trained ML model entails a privacy risk for the personal data used in training comparable to directly releasing those data. However, given a trained model, an adversary must still mount a privacy attack to make inferences about the training data. In this concept paper, we examine the main families of privacy attacks against predictive and generative ML, including membership inference attacks (MIAs), property inference attacks, and reconstruction attacks. Our discussion shows that most of these attacks seem less effective in the real world than a prima facie reading of the related literature would suggest.
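To make the MIA family named in the abstract concrete, the sketch below implements the simplest variant, a loss-threshold membership inference attack in the style of Yeom et al. (2018): a record is guessed to be a training member when the model's loss on it falls below a threshold. The synthetic loss distributions and the median threshold are illustrative assumptions, not material from the paper.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Loss-threshold MIA: predict 'member' when the model's loss
    on a record is below the threshold (Yeom et al.-style)."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    predictions = (losses < threshold).astype(float)
    return (predictions == labels).mean()  # attack accuracy

# Hypothetical per-example losses: members tend to have lower loss
# than non-members when the model overfits its training data.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # assumed
nonmember_losses = rng.exponential(scale=0.5, size=1000)  # assumed
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
print(f"MIA accuracy: {loss_threshold_mia(member_losses, nonmember_losses, threshold):.3f}")
```

An accuracy near 0.5 means the attack does no better than random guessing at distinguishing members from non-members; how often real-world attacks land in that regime is precisely the question the paper examines.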
Similar Papers
Evaluating the Dynamics of Membership Privacy in Deep Learning
Machine Learning (CS)
Protects private data used to train AI.
Membership Inference Attacks Beyond Overfitting
Cryptography and Security
Protects private data used to train smart programs.
Position: Privacy Is Not Just Memorization!
Cryptography and Security
Protects your secrets from smart computer programs.