Bayesian Inference of Training Dataset Membership
By: Yongchao Huang
Potential Business Impact:
Finds if your private data was used in AI.
Determining whether a dataset was part of a machine learning model's training data pool can reveal privacy vulnerabilities, a challenge often addressed through membership inference attacks (MIAs). Traditional MIAs typically require access to model internals or rely on computationally intensive shadow models. This paper proposes an efficient, interpretable, and principled Bayesian approach to membership inference. By analyzing post-hoc metrics from a trained ML model, such as prediction error, confidence (entropy), perturbation magnitude, and dataset statistics, our approach computes posterior probabilities of membership without requiring extensive model training. Experimental results on synthetic datasets demonstrate the method's effectiveness in distinguishing member from non-member datasets. Beyond membership inference, the method can also detect distribution shifts, offering a practical and interpretable alternative to existing approaches.
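The core idea of posterior membership inference from post-hoc metrics can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes, purely for illustration, that per-sample prediction errors follow Gaussian likelihoods with different parameters for member and non-member data, and combines them with a prior via Bayes' rule.

```python
import math
import random

def gauss_logpdf(x, mu, sd):
    """Log density of a Gaussian N(mu, sd^2) at x."""
    return -0.5 * math.log(2 * math.pi * sd ** 2) - (x - mu) ** 2 / (2 * sd ** 2)

def membership_posterior(errors, member=(0.1, 0.05), nonmember=(0.4, 0.15),
                         prior=0.5):
    """Posterior P(member | errors) under assumed (hypothetical) Gaussian
    likelihoods for the per-sample prediction error of a trained model.
    Members are assumed to show lower error than non-members."""
    ll_m = sum(gauss_logpdf(e, *member) for e in errors)
    ll_n = sum(gauss_logpdf(e, *nonmember) for e in errors)
    # Bayes' rule computed in log space for numerical stability.
    a = math.log(prior) + ll_m
    b = math.log(1 - prior) + ll_n
    m = max(a, b)
    return math.exp(a - m) / (math.exp(a - m) + math.exp(b - m))

random.seed(0)
# Synthetic query datasets: low errors mimic seen (member) data,
# higher errors mimic unseen (non-member) data.
member_errs = [random.gauss(0.1, 0.05) for _ in range(50)]
nonmember_errs = [random.gauss(0.4, 0.15) for _ in range(50)]
print(membership_posterior(member_errs))     # close to 1.0
print(membership_posterior(nonmember_errs))  # close to 0.0
```

In practice the paper aggregates several such metrics (error, entropy, perturbation magnitude, dataset statistics); the sketch shows only how one metric's likelihoods turn into a posterior membership probability.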
Similar Papers
Efficient Membership Inference Attacks by Bayesian Neural Network
Machine Learning (CS)
Finds if your private info was in AI training.
Evaluating the Dynamics of Membership Privacy in Deep Learning
Machine Learning (CS)
Finds how private data gets exposed during AI training.
Membership Inference Attacks Beyond Overfitting
Cryptography and Security
Protects private data used to train smart programs.