Lower Bounds on the MMSE of Adversarially Inferring Sensitive Features
By: Monica Welfert, Nathan Stromberg, Mario Diaz, and others
Potential Business Impact:
Protects private info from being guessed.
We propose an adversarial evaluation framework for sensitive feature inference based on minimum mean-squared error (MMSE) estimation with a finite sample size and linear predictive models. Our approach establishes theoretical lower bounds on the true MMSE of inferring sensitive features from noisy observations of other correlated features. These bounds are expressed in terms of the empirical MMSE under a restricted hypothesis class and a non-negative error term. The error term captures both the estimation error due to the finite number of samples and the approximation error from using a restricted hypothesis class. For linear predictive models, we derive closed-form bounds on the approximation error, which are order optimal in terms of the noise variance, for several classes of relationships between the sensitive and non-sensitive features, including linear mappings, binary symmetric channels, and class-conditional multivariate Gaussian distributions. We also present a new lower bound that relies on the MSE, computed on a hold-out validation dataset, of the MMSE estimator learned from finite samples within a restricted hypothesis class. Through empirical evaluation, we demonstrate that our framework serves as an effective tool for MMSE-based adversarial evaluation of sensitive feature inference that balances theoretical guarantees with practical efficiency.
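To make the quantities in the abstract concrete, here is a minimal sketch (not the authors' code) of the two empirical quantities the bounds are built from: the empirical MMSE of a linear predictor of a sensitive feature S from noisy correlated observations Y, and the MSE of that same predictor on a hold-out validation set. The synthetic linear-Gaussian data, the train/validation split, and all variable names are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: empirical MMSE under a restricted (affine) hypothesis class
# and the hold-out validation MSE used by the paper's validation-based bound.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

# Illustrative synthetic data: S is the sensitive feature; Y is a noisy
# linear observation of correlated features (one of the relationship
# classes mentioned in the abstract: a linear mapping with additive noise).
n, d, noise_var = 2000, 5, 0.5
S = rng.normal(size=n)
A = rng.normal(size=d)
Y = np.outer(S, A) + rng.normal(scale=np.sqrt(noise_var), size=(n, d))

# Split into training and hold-out validation halves.
n_tr = n // 2
Y_tr, Y_val = Y[:n_tr], Y[n_tr:]
S_tr, S_val = S[:n_tr], S[n_tr:]

# Restricted hypothesis class: affine predictors S_hat = Y @ w + b,
# fit by ordinary least squares on the training half.
X_tr = np.hstack([Y_tr, np.ones((n_tr, 1))])
w, *_ = lstsq(X_tr, S_tr, rcond=None)

# Empirical MMSE over the restricted class (training data) and the
# hold-out MSE of the learned estimator (validation data).
mse_train = np.mean((X_tr @ w - S_tr) ** 2)
X_val = np.hstack([Y_val, np.ones((n - n_tr, 1))])
mse_val = np.mean((X_val @ w - S_val) ** 2)

print(f"empirical MMSE (linear class, train): {mse_train:.4f}")
print(f"hold-out validation MSE:              {mse_val:.4f}")
```

Under the framework described above, quantities like these would be combined with a non-negative error term (covering finite-sample estimation error and the approximation error of the linear class) to lower-bound the true MMSE; the exact form of that term is given in the paper, not reproduced here.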
Similar Papers
On the Sample Complexity of Learning for Blind Inverse Problems
Machine Learning (CS)
Teaches computers to fix blurry pictures.
One-Bit Distributed Mean Estimation with Unknown Variance
Information Theory
Helps computers guess averages with tiny messages.
Finite-Sample Properties of Generalized Ridge Estimators in Nonlinear Models
Methodology
Improves computer predictions by balancing accuracy and simplicity.